Dataset schema (column, dtype, and observed range of values). Each record below follows this column order, with the README field expanded inline.

| Column | Dtype | Range / values |
|---|---|---|
| id | string | length 6–113 |
| author | string | length 2–36 |
| task_category | string (categorical) | 39 classes |
| tags | list | length 1–4.05k |
| created_time | int64 | 1,646B–1,742B |
| last_modified | timestamp[s] | 2020-05-14 13:13:12 – 2025-03-18 10:01:09 |
| downloads | int64 | 0–118M |
| likes | int64 | 0–4.86k |
| README | string | length 30–1.01M |
| matched_task | list | length 1–10 |
| is_bionlp | string (categorical) | 3 classes |
RichardErkhov/artificialguybr_-_Gemma2-2B-OpenHermes2.5-4bits
|
RichardErkhov
| null |
[
"safetensors",
"gemma2",
"4-bit",
"bitsandbytes",
"region:us"
] | 1,736,580,272,000 | 2025-01-11T07:25:38 | 5 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Gemma2-2B-OpenHermes2.5 - bnb 4bits
- Model creator: https://huggingface.co/artificialguybr/
- Original model: https://huggingface.co/artificialguybr/Gemma2-2B-OpenHermes2.5/
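Below is a minimal loading sketch for the pre-quantized bnb 4-bit checkpoint, assuming `transformers`, `accelerate`, and `bitsandbytes` are installed and that your `transformers` version supports Gemma 2; only the repo id is taken from this card, the prompt and generation settings are illustrative.
```python
# Minimal sketch: load the bnb 4-bit checkpoint directly with 🤗 Transformers.
# Assumes transformers (with Gemma 2 support), accelerate and bitsandbytes are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/artificialguybr_-_Gemma2-2B-OpenHermes2.5-4bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Explain 4-bit quantization in one sentence."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```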
Original model description:
---
tags:
- GEMMA
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
model-index:
- name: google/gemma-2-2b
results: []
license: apache-2.0
language:
- en
library_name: transformers
datasets:
- teknium/OpenHermes-2.5
---
# Model Card for GEMMA2-2B-openhermes-2.5
This model is a fine-tuned version of Gemma 2 2B on the OpenHermes-2.5 dataset.
## Model Details
### Model Description
This is a fine-tuned version of the google/gemma-2-2b model, trained on the OpenHermes-2.5 dataset. It is designed for instruction following and general language tasks.
- **Developed by:** artificialguybr
- **Model type:** Causal Language Model
- **Language(s):** English
- **License:** apache-2.0
- **Finetuned from model:** google/gemma-2-2b
### Model Sources
- **Repository:** https://huggingface.co/artificialguybr/Gemma2-2B-OpenHermes2.5
## Uses
This model can be used for various natural language processing tasks, particularly those involving instruction following and general language understanding.
### Direct Use
The model can be used for tasks such as text generation, question answering, and other language-related applications.
### Out-of-Scope Use
The model should not be used for generating harmful or biased content. Users should be aware of potential biases in the training data.
## Training Details
### Training Data
The model was fine-tuned on the teknium/OpenHermes-2.5 dataset.
### Training Procedure
#### Hardware and Software
- **Hardware:** NVIDIA A100-SXM4-80GB (1 GPU)
- **Software Framework:** 🤗 Transformers, Axolotl
## Limitations and Biases
More information is needed about specific limitations and biases of this model.
|
[
"QUESTION_ANSWERING"
] |
Non_BioNLP
|
odunola/distillbert-distilled-ag-news-2
|
odunola
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:ag_news",
"base_model:google/bert_uncased_L-8_H-256_A-4",
"base_model:finetune:google/bert_uncased_L-8_H-256_A-4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,699,912,230,000 | 2023-11-14T16:30:47 | 6 | 0 |
---
base_model: google/bert_uncased_L-8_H-256_A-4
datasets:
- ag_news
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distillbert-distilled-ag-news-2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: ag_news
type: ag_news
config: default
split: train
args: default
metrics:
- type: accuracy
value: 0.9407916666666667
name: Accuracy
---
# distillbert-distilled-ag-news-2
This model is a fine-tuned version of [google/bert_uncased_L-8_H-256_A-4](https://huggingface.co/google/bert_uncased_L-8_H-256_A-4) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1945
- Accuracy: 0.9408
## Model description
More information needed
## Intended uses & limitations
More information needed
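As a minimal illustration, the checkpoint can be loaded with the 🤗 Transformers `pipeline` for AG News-style topic classification; the model id is taken from this card, while the example headline is illustrative and not drawn from the dataset.
```python
# Minimal sketch: run the fine-tuned AG News classifier via the transformers pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="odunola/distillbert-distilled-ag-news-2",
)

# Illustrative headline; AG News labels cover World, Sports, Business, and Sci/Tech.
print(classifier("Stocks rallied after the central bank held interest rates steady."))
```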
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
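These hyperparameters map roughly onto a `TrainingArguments` configuration like the sketch below; the output directory name is illustrative, and the Adam betas/epsilon listed above are the 🤗 Trainer defaults, so they are not set explicitly.
```python
# Sketch of a TrainingArguments configuration matching the hyperparameters above.
# The output directory is illustrative and not taken from this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distillbert-distilled-ag-news-2",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```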
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.238 | 1.0 | 3000 | 0.2240 | 0.9237 |
| 0.1873 | 2.0 | 6000 | 0.2009 | 0.9329 |
| 0.1597 | 3.0 | 9000 | 0.1919 | 0.9377 |
| 0.1495 | 4.0 | 12000 | 0.1948 | 0.9400 |
| 0.1303 | 5.0 | 15000 | 0.1945 | 0.9408 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
cvapict/yhi-message-topic-all-MiniLM-L12-v2
|
cvapict
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,695,211,368,000 | 2023-09-20T21:50:30 | 23 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# cvapict/yhi-message-type-v2-all-MiniLM-L12-v2
Evaluation accuracy: 0.8269230769230769
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
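The two steps above correspond roughly to the training sketch below using the `setfit` library; the base Sentence Transformer and the toy dataset are illustrative, only the few-shot recipe itself comes from this card.
```python
# Sketch of the two-step SetFit recipe: contrastive fine-tuning of the sentence
# transformer body, then fitting a classification head on its embeddings.
# Base model and toy data are illustrative, not taken from this card.
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

train_ds = Dataset.from_dict({
    "text": ["great support experience", "the app keeps crashing"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/all-MiniLM-L12-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,
    batch_size=16,
    num_iterations=20,  # number of contrastive pairs generated per example
)
trainer.train()
```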
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("cvapict/yhi-message-type-v2-all-MiniLM-L12-v2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
FabsCool/autotrain-T5Base1_1-728922203
|
FabsCool
|
text2text-generation
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"unk",
"dataset:FabsCool/autotrain-data-T5Base1_1",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,649,657,953,000 | 2022-04-11T10:31:58 | 116 | 0 |
---
datasets:
- FabsCool/autotrain-data-T5Base1_1
language: unk
tags:
- autotrain
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions: 583.728921803621
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 728922203
- CO2 Emissions (in grams): 583.728921803621
## Validation Metrics
- Loss: 1.2922444343566895
- Rouge1: 54.3928
- Rouge2: 31.666
- RougeL: 50.3552
- RougeLsum: 50.3694
- Gen Len: 13.3425
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/FabsCool/autotrain-T5Base1_1-728922203
```
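The same request can be made from Python with `requests`; the endpoint mirrors the cURL call above, and the token placeholder is yours to replace.
```python
# Sketch of the same Inference API call in Python; replace the token placeholder.
import requests

API_URL = "https://api-inference.huggingface.co/models/FabsCool/autotrain-T5Base1_1-728922203"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}

response = requests.post(API_URL, headers=headers, json={"inputs": "I love AutoTrain"})
print(response.json())
```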
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
udrearobert999/multi-qa-mpnet-base-cos-v1-test
|
udrearobert999
|
text-classification
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/multi-qa-mpnet-base-cos-v1",
"base_model:finetune:sentence-transformers/multi-qa-mpnet-base-cos-v1",
"model-index",
"region:us"
] | 1,714,648,327,000 | 2024-05-02T14:36:36 | 5 | 2 |
---
base_model: sentence-transformers/multi-qa-mpnet-base-cos-v1
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: in durankulak near varna is another important example other signs of early
metals are found from the third millennium bc in palmela portugal los millares
spain and stonehenge united kingdom the precise beginnings however have not be
clearly ascertained and new discoveries are both continuous and ongoing in tamilnadu
in approximately 1900 bc ancient iron smelting sites were functioning in tamil
nadu in the near east about 3500 bc it was discovered that by combining copper
and tin a superior metal could be made an alloy called bronze this represented
a major technological shift known as the bronze age the extraction of iron from
its ore into a workable metal is much more difficult than for copper or tin the
process appears to have been invented by the hittites in about 1200 bc beginning
the iron age the secret of extracting and working iron was a key factor in the
success of the philistineshistorical developments in ferrous metallurgy can be
found in a wide variety of past cultures and civilizations this includes the ancient
and medieval kingdoms and empires of the middle east and near east ancient iran
ancient egypt ancient nubia and anatolia in presentday turkey ancient nok carthage
the greeks and romans of ancient europe medieval europe ancient and medieval china
ancient and medieval india ancient and medieval japan amongst others many applications
practices and devices associated or involved in metallurgy were established in
ancient china such as the innovation of the blast furnace cast iron hydraulicpowered
trip hammers and double acting piston bellowsa 16th century book by georg agricola
de re metallica describes the highly developed and complex processes of mining
metal ores metal extraction and metallurgy of the time agricola has been described
as the father of metallurgy extractive metallurgy is the practice of removing
valuable metals from an ore and refining the extracted raw metals into a purer
form in order to convert a metal oxide or sulphide to a purer metal the ore must
be reduced physically chemically or electrolytically extractive metallurgists
are interested in three primary streams feed concentrate metal oxidesulphide and
tailings waste after mining large pieces of the ore feed are broken through crushing
or grinding in order to obtain particles small enough where each particle is either
mostly valuable or mostly waste concentrating the particles of value in a form
supporting separation enables the desired metal to be removed from waste products
mining may not be necessary if the ore body and physical environment are conducive
to leaching leaching dissolves minerals in an ore body and results in an enriched
solution the solution is collected and processed to extract valuable metals ore
- text: '##rch procedure that evaluates the objective function p x displaystyle pmathbf
x on a grid of candidate source locations g displaystyle mathcal g to estimate
the spatial location of the sound source x s displaystyle textbf xs as the point
of the grid that provides the maximum srp modifications of the classical srpphat
algorithm have been proposed to reduce the computational cost of the gridsearch
step of the algorithm and to increase the robustness of the method in the classical
srpphat for each microphone pair and for each point of the grid a unique integer
tdoa value is selected to be the acoustic delay corresponding to that grid point
this procedure does not guarantee that all tdoas are associated to points on the
grid nor that the spatial grid is consistent since some of the points may not
correspond to an intersection of hyperboloids this issue becomes more problematic
with coarse grids since when the number of points is reduced part of the tdoa
information gets lost because most delays are not anymore associated to any point
in the grid the modified srpphat collects and uses the tdoa information related
to the volume surrounding each spatial point of the search grid by considering
a modified objective function where l m 1 m 2 l x displaystyle lm1m2lmathbf x
and l m 1 m 2 u x displaystyle lm1m2umathbf x are the lower and upper accumulation
limits of gcc delays which depend on the spatial location x displaystyle mathbf
x the accumulation limits can be calculated beforehand in an exact way by exploring
the boundaries separating the regions corresponding to the points of the grid
alternatively they can be selected by considering the spatial gradient of the
tdoa ∇ τ m 1 m 2 x ∇ x τ m 1 m 2 x ∇ y τ m 1 m 2 x ∇ z τ m 1 m 2 x t displaystyle
nabla tau m1m2mathbf x nabla xtau m1m2mathbf x nabla ytau m1m2mathbf x nabla ztau
m1m2mathbf x t where each component γ ∈ x y z displaystyle gamma in leftxyzright
of the gradient is for a rectangular grid where neighboring points are separated
a distance r displaystyle r the lower and upper accumulation limits are given
by where d r 2 min 1 sin θ cos [UNK] 1 sin θ sin [UNK] 1 cos θ displaystyle dr2min
leftfrac 1vert sintheta cosphi vert frac 1vert sintheta sinphi vert frac 1vert'
- text: authority to select projects and mandated new metropolitan planning initiatives
for the first time state transportation officials were required to consult seriously
with local representatives on mpo governing boards regarding matters of project
prioritization and decisionmaking these changes had their roots in the need to
address increasingly difficult transportation problems — in particular the more
complicated patterns of traffic congestion that arose with the suburban development
boom in the previous decades many recognized that the problems could only be addressed
effectively through a stronger federal commitment to regional planning the legislation
that emerged the intermodal surface transportation efficiency act istea was signed
into federal law by president george h w bush in december 1991 it focused on improving
transportation not as an end in itself but as the means to achieve important national
goals including economic progress cleaner air energy conservation and social equity
istea promoted a transportation system in which different modes and facilities
— highway transit pedestrian bicycle aviation and marine — were integrated to
allow a seamless movement of both goods and people new funding programs provided
greater flexibility in the use of funds particularly regarding using previously
restricted highway funds for transit development improved intermodal connections
and emphasized upgrades to existing facilities over building new capacity — particularly
roadway capacity to accomplish more serious metropolitan planning istea doubled
federal funding for mpo operations and required the agencies to evaluate a variety
of multimodal solutions to roadway congestion and other transportation problems
mpos also were required to broaden public participation in the planning process
and to see that investment decisions contributed to meeting the air quality standards
of the clean air act amendments in addition istea placed a new requirement on
mpos to conduct fiscally constrained planning and ensure that longrange transportation
plans and shortterm transportation improvement programs were fiscally constrained
in other words adopted plans and programs can not include more projects than reasonably
can be expected to be funded through existing or projected sources of revenues
this new requirement represented a major conceptual shift for many mpos and others
in the planning community since the imposition of fiscal discipline on plans now
required not only understanding how much money might be available but how to prioritize
investment needs and make difficult choices among competing needs adding to this
complexity is the need to plan across transportation modes and develop approaches
for multimodal investment prioritization and decision making it is in this context
of greater prominence funding and requirements that mpos function today an annual
element is composed of transportation improvement projects contained in an areas
transportation improvement program tip which is proposed for implementation during
the current year the annual element is submitted to the us department of transportation
as part of the required planning process the passage of safe accountable flexible
efficient transportation equity act a legacy for users safetealu
- text: '##pignygiroux served as an assistant professor from 1997 2003 associate professor
from 2003 2014 chair of the department of geography from 2015 2018 and professor
beginning in 2014 with secondary appointments in department of geology the college
of education social services and rubenstein school of environment natural resources
she teaches courses in meteorology climatology physical geography remote sensing
and landsurface processes in her work as state climatologist for vermont dupignygiroux
uses her expertise hydrology and extreme weather such as floods droughts and storms
to keep the residents of vermont informed on how climate change will affect their
homes health and livelihoods she assists other state agencies in preparing for
and adapting to current and future impacts of climate change on vermonts transportation
system emergency management planning and agriculture and forestry industries for
example she has published analyses of the impacts of climate change on the health
of vermonts sugar maples a hardwood species of key economic and cultural importance
to the state as cochair of vermonts state ’ s drought task force she played a
key role in developing the 2018 vermont state hazard mitigation plandupignygiroux
served as secretary for the american association of state climatologists from
20102011 and president elect from 20192020 in june 2020 she was elected as president
of the american association of state climatologists which is a twoyear term in
addition to her research on climate change dupignygiroux is known for her efforts
to research and promote climate literacy climate literacy is an understanding
of the influences of and influences on the climate system including how people
change the climate how climate metrics are observed and modelled and how climate
change affects society “ being climate literate is more critical than ever before
” lesleyann dupignygiroux stated for a 2020 article on climate literacy “ if we
do not understand weather climate and climate change as intricate and interconnected
systems then our appreciation of the big picture is lost ” dupignygiroux is known
for her climate literacy work with elementary and high school teachers and students
she cofounded the satellites weather and climate swac project in 2008 which is
a professional development program for k12 teachers designed to promote climate
literacy and interest in the stem science technology engineering and mathematics
careers dupignygiroux is also a founding member of the climate literacy and energy
awareness network clean formerly climate literacy network a communitybased effort
to support climate literacy and communication in a 2016 interview dupignygiroux
stated “ sharing knowledge and giving back to my community are my two axioms in
life watching students mature and flourish in'
- text: no solutions to x n y n z n displaystyle xnynzn for all n ≥ 3 displaystyle
ngeq 3 this claim appears in his annotations in the margins of his copy of diophantus
euler the interest of leonhard euler 1707 – 1783 in number theory was first spurred
in 1729 when a friend of his the amateur goldbach pointed him towards some of
fermats work on the subject this has been called the rebirth of modern number
theory after fermats relative lack of success in getting his contemporaries attention
for the subject eulers work on number theory includes the following proofs for
fermats statements this includes fermats little theorem generalised by euler to
nonprime moduli the fact that p x 2 y 2 displaystyle px2y2 if and only if p ≡
1 mod 4 displaystyle pequiv 1bmod 4 initial work towards a proof that every integer
is the sum of four squares the first complete proof is by josephlouis lagrange
1770 soon improved by euler himself the lack of nonzero integer solutions to x
4 y 4 z 2 displaystyle x4y4z2 implying the case n4 of fermats last theorem the
case n3 of which euler also proved by a related method pells equation first misnamed
by euler he wrote on the link between continued fractions and pells equation first
steps towards analytic number theory in his work of sums of four squares partitions
pentagonal numbers and the distribution of prime numbers euler pioneered the use
of what can be seen as analysis in particular infinite series in number theory
since he lived before the development of complex analysis most of his work is
restricted to the formal manipulation of power series he did however do some very
notable though not fully rigorous early work on what would later be called the
riemann zeta function quadratic forms following fermats lead euler did further
research on the question of which primes can be expressed in the form x 2 n y
2 displaystyle x2ny2 some of it prefiguring quadratic reciprocity diophantine
equations euler worked on some diophantine equations of genus 0 and 1 in particular
he studied diophantuss work he tried to systematise it but the time was not yet
ripe for such an endeavour — algebraic geometry was still in its infancy he did
notice there was a connection between diophantine problems and elliptic integrals
whose study he had himself initiated lagrange legendre and gauss josephlouis
inference: true
model-index:
- name: SetFit with sentence-transformers/multi-qa-mpnet-base-cos-v1
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.6908674054260604
name: Accuracy
---
# SetFit with sentence-transformers/multi-qa-mpnet-base-cos-v1
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/multi-qa-mpnet-base-cos-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-cos-v1) as the Sentence Transformer embedding model. A [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
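For inference, a minimal sketch is shown below, assuming the `setfit` library is installed; the example sentences are illustrative and not taken from the training data.
```python
# Minimal sketch: load this SetFit checkpoint and predict one of its 43 classes.
from setfit import SetFitModel

model = SetFitModel.from_pretrained("udrearobert999/multi-qa-mpnet-base-cos-v1-test")
preds = model([
    "extractive metallurgy refines valuable metals from an ore into a purer form",
    "euler generalised fermat's little theorem to non-prime moduli",
])
print(preds)
```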
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/multi-qa-mpnet-base-cos-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-cos-v1)
- **Classification head:** a [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 43 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------|
| 20 | <ul><li>'##les approach which combined geography history and the sociological approaches of the annee sociologique many members of which were their colleagues at strasbourg to produce an approach which rejected the predominant emphasis on politics diplomacy and war of many 19th and early 20thcentury historians as spearheaded by historians whom febvre called les sorbonnistes instead they pioneered an approach to a study of longterm historical structures la longue duree over events and political transformations geography material culture and what later annalistes called mentalites or the psychology of the epoch are also characteristic areas of study the goal of the annales was to undo the work of the sorbonnistes to turn french historians away from the narrowly political and diplomatic toward the new vistas in social and economic historycofounder marc bloch 1886 – 1944 was a quintessential modernist who studied at the elite ecole normale superieure and in germany serving as a professor at the university of strasbourg until he was called to the sorbonne in paris in 1936 as professor of economic history blochs interests were highly interdisciplinary influenced by the geography of paul vidal de la blache 1845 – 1918 and the sociology of emile durkheim 1858 – 1917 his own ideas especially those expressed in his masterworks french rural history les caracteres originaux de lhistoire rurale francaise 1931 and feudal society were incorporated by the secondgeneration annalistes led by fernand braudel georges duby a leader of the school wrote that the history he taught relegated the sensational to the sidelines and was reluctant to give a simple accounting of events but strove on the contrary to pose and solve problems and neglecting surface disturbances to observe the long and mediumterm evolution of economy society and civilisationthe annalistes especially lucien febvre advocated a histoire totale or histoire tout court a complete study of a historic problem bloch was shot by the gestapo during the german occupation of france in world war ii for his active membership of the french resistance and febvre carried on the annales approach in the 1940s and 1950s it was during this time that he mentored braudel who would become one of the bestknown exponents of this school braudels work came to define a second era of annales historiography and was very influential throughout the 1960s and 1970s especially for his work on the mediterranean region in the era of philip ii of spain braudel developed the idea often associated with annalistes of different modes of historical time lhistoire quasi immobile the quasi motionless history of historical'</li><li>'is important because the persuasiveness of a source usually depends upon its history primary sources may include cases constitutions statutes administrative regulations and other sources of binding legal authority while secondary legal sources may include books the headnotes of case reports articles and encyclopedias legal writers usually prefer to cite primary sources because only primary sources are authoritative and precedential while secondary sources are only persuasive at best family history a secondary source is a record or statement of an event or circumstance made by a noneyewitness or by someone not closely connected with the event or circumstances recorded or stated verbally either at or sometime after the event or by an eyewitness at a time after the event when the fallibility of memory is an important factor consequently according to this 
definition a firsthand account written long after the event when the fallibility of memory is an important factor is a secondary source even though it may be the first published description of that event autobiographies an autobiography can be a secondary source in history or the humanities when used for information about topics other than its subject for example many firsthand accounts of events in world war i written in the postwar years were influenced by the then prevailing perception of the war which was significantly different from contemporary opinion original research jules r benjamin a students guide to history 2013 isbn 9781457621444 edward h carr what is history basingstoke palgrave 2001 isbn 9780333977019 wood gray historians handbook a key to the study and writing of history prospect heights il waveland press 1991 ©1964 isbn 9780881336269 derek harland a basic course in genealogy volume two research procedure and evaluation of evidence bookcraft inc 1958 worldcat record richard holmes tommy harpercollins 2004 isbn 9780007137510 martha c howell and walter prevenier from reliable sources an introduction to historical methods 2001 isbn 9780801435737 richard a marius and melvin e page a short guide to writing about history 8th edition 2012 isbn 9780205118601 hayden white metahistory the historical imagination in nineteenthcentury europe baltimore johns hopkins university press 1973 isbn 9780801814693'</li><li>'have a meticulous approach to reconstructing the costumes or material culture of past eras but who are perceived to lack much understanding of the cultural values and historical contexts of the periods in question a college or society of antiquaries was founded in london in c 1586 to debate matters of antiquarian interest members included william camden sir robert cotton john stow william lambarde richard carew and others this body existed until 1604 when it fell under suspicion of being political in its aims and was abolished by king james i papers read at their meetings are preserved in cottons collections and were printed by thomas hearne in 1720 under the title a collection of curious discourses a second edition appearing in 1771 in 1707 a number of english antiquaries began to hold regular meetings for the discussion of their hobby and in 1717 the society of antiquaries was formally reconstituted finally receiving a charter from king george ii in 1751 in 1780 king george iii granted the society apartments in somerset house and in 1874 it moved into its present accommodation in burlington house piccadilly the society was governed by a council of twenty and a president who is ex officio a trustee of the british museum the society of antiquaries of scotland was founded in 1780 and had the management of a large national antiquarian museum in edinburgh the society of antiquaries of newcastle upon tyne the oldest provincial antiquarian society in england was founded in 1813 in ireland a society was founded in 1849 called the kilkenny archaeological society holding its meetings at kilkenny in 1869 its name was changed to the royal historical and archaeological association of ireland and in 1890 to the royal society of antiquaries of ireland its office being transferred to dublin in france the societe des antiquaires de france was formed in 1813 by the reconstruction of the academie celtique which had existed since 1804 the american antiquarian society was founded in 1812 with its headquarters at worcester massachusetts in modern times its library has grown to over 4 million 
items and as an institution it is internationally recognized as a repository and research library for early pre1876 american printed materials in denmark the kongelige nordiske oldskriftselskab also known as la societe royale des antiquaires du nord or the royal society of northern antiquaries was founded at copenhagen in 1825 in germany the gesamtverein der deutschen geschichts und altertumsvereine was founded in 1852in addition a number of local historical and archaeological societies have adopted the word antiquarian in their titles these have included the cambridge antiquarian society'</li></ul> |
| 42 | <ul><li>'been described as the worlds largest repository of covid19 sequences and by far the worlds largest database of sarscov2 sequences by midapril 2021 gisaids sarscov2 database reached over 1200000 submissions a testament to the hard work of researchers in over 170 different countries only three months later the number of uploaded sarscov2 sequences had doubled again to over 24 million by late 2021 the database contained over 5 million genome sequences as of december 2021 over 6 million sequences had been submitted by april 2022 there were 10 million sequences accumulated and in january 2023 the number had reached 144 millionin january 2020 the sarscov2 genetic sequence data was shared through gisaid throughout the first year of the covid19 pandemic most of the sarscov2 wholegenome sequences that were generated and shared globally were submitted through gisaid when the sarscov2 omicron variant was detected in south africa by quickly uploading the sequence to gisaid the national institute for communicable diseases there was able to learn that botswana and hong kong had also reported cases possessing the same gene sequencein march 2023 gisaid temporarily suspended database access for some scientists removing raw data relevant to investigations of the origins of sarscov2 gisaid stated that they do not delete records from their database but data may become temporarily invisible during updates or corrections availability of the data was restored with an additional restriction that any analysis based thereon would not be shared with the public the board of friends of gisaid consists of peter bogner and two german lawyers who are not involved in the daytoday operations of the organisation scientific advice to the organization is provided by its scientific advisory council including directors of leading public health laboratories such as who collaborating centres for influenza in 2023 gisaids lack of transparency was criticized by some gisaid funders including the european commission and the rockefeller foundation with longterm funding being denied from international federation of pharmaceutical manufacturers and associations ifpma in june 2023 it was reported in vanity fair that bogner had said that gisaid will soon launch an independent compliance board responsible for addressing a wide range of governance matters the telegraph similarly reported that gisaids inhouse counsel was developing new governance processes intended to be transparent and allow for the resolution of scientific disputes without the involvement of bogner the creation of the gisaid database was motivated in part by concerns raised by researchers from developing countries with scientific american noting in 2009 that that a previous datasharing system run by who forced them to give up intellectual'</li><li>'viruses can be named based on the antibodies they react with the use of the antibodies which were once exclusively derived from the serum blood fluid of animals is called serology once an antibody – reaction has taken place in a test other methods are needed to confirm this older methods included complement fixation tests hemagglutination inhibition and virus neutralisation newer methods use enzyme immunoassays eiain the years before pcr was invented immunofluorescence was used to quickly confirm viral infections it is an infectivity assay that is virus species specific because antibodies are used the antibodies are tagged with a dye that is luminescencent and when using an optical microscope with a modified 
light source infected cells glow in the dark pcr is a mainstay method for detecting viruses in all species including plants and animals it works by detecting traces of virus specific rna or dna it is very sensitive and specific but can be easily compromised by contamination most of the tests used in veterinary virology and medical virology are based on pcr or similar methods such as transcription mediated amplification when a novel virus emerges such as the covid coronavirus a specific test can be devised quickly so long as the viral genome has been sequenced and unique regions of the viral dna or rna identified the invention of microfluidic tests as allowed for most of these tests to be automated despite its specificity and sensitivity pcr has a disadvantage in that it does not differentiate infectious and noninfectious viruses and tests of cure have to be delayed for up to 21 days to allow for residual viral nucleic acid to clear from the site of the infection in laboratories many of the diagnostic test for detecting viruses are nucleic acid amplification methods such as pcr some tests detect the viruses or their components as these include electron microscopy and enzymeimmunoassays the socalled home or selftesting gadgets are usually lateral flow tests which detect the virus using a tagged monoclonal antibody these are also used in agriculture food and environmental sciences counting viruses quantitation has always had an important role in virology and has become central to the control of some infections of humans where the viral load is measured there are two basic methods those that count the fully infective virus particles which are called infectivity assays and those that count all the particles including the defective ones infectivity assays measure the amount concentration of infective viruses in a sample of known volume for host cells plants or cultures of bacterial or animal cells are used laboratory animals such as mice'</li><li>'vpx is a virionassociated protein encoded by human immunodeficiency virus type 2 hiv2 and most simian immunodeficiency virus siv strains but that is absent from hiv1 it is similar in structure to the protein vpr that is carried by siv and hiv2 as well as hiv1 vpx is one of five accessory proteins vif vpx vpr vpu and nef carried by lentiviruses that enhances viral replication by inhibiting host antiviral factorsvpx enhances hiv2 replication in humans by counteracting the host factor samhd1 samhd1 is a host factor found in human myeloid cells such as dendritic cells and macrophages that restricts hiv1 replication by depleting the cytoplasmic pool of deoxynucleoside triphosphates needed for viral dna production samhd1 does not however restrict hiv2 replication in myeloid cells due to the presence of viral vpx vpx counteracts restriction by inducing the ubiquitinproteasomedependent degradation of samhd1 vpxmediated degradation of samhd1 therefore decreases deoxynucleoside triphosphate hydrolysis thereby increasing the availability of dntps for viral reverse transcription in the cytoplasm it has been postulated that samhd1 degradation is required for hiv2 replication because the hiv2 reverse transcriptase rt is less active than the hiv1 rt which would be the reason for the absence of vpx from hiv1 because vpx is required for hiv2 reverse transcription and the early stages of the viral life cycle it is packaged into virions in significant amountsvpx is also involved in the nuclear import of the hiv2siv genomes and associated proteins but the specific 
mechanisms and interactions are currently unknown although vpr and vpx are similar in size both are 100 amino acids with 2025 sequence similarity and structure both are predicted to have similar tertiary structure with three major helices they serve very different roles in viral replication vpx targets a host restriction factor for proteasomal degradation while vpr arrests the host cell cycle in the g2 phase however they are both involved in the import of the viral preintegration complex into the host nucleus'</li></ul> |
| 19 | <ul><li>'##es insulin blood glucose from the portal vein enters liver cells hepatocytes insulin acts on the hepatocytes to stimulate the action of several enzymes including glycogen synthase glucose molecules are added to the chains of glycogen as long as both insulin and glucose remain plentiful in this postprandial or fed state the liver takes in more glucose from the blood than it releases after a meal has been digested and glucose levels begin to fall insulin secretion is reduced and glycogen synthesis stops when it is needed for energy glycogen is broken down and converted again to glucose glycogen phosphorylase is the primary enzyme of glycogen breakdown for the next 8 – 12 hours glucose derived from liver glycogen is the primary source of blood glucose used by the rest of the body for fuel glucagon another hormone produced by the pancreas in many respects serves as a countersignal to insulin in response to insulin levels being below normal when blood levels of glucose begin to fall below the normal range glucagon is secreted in increasing amounts and stimulates both glycogenolysis the breakdown of glycogen and gluconeogenesis the production of glucose from other sources muscle glycogen appears to function as an immediate reserve source of available phosphorylated glucose in the form of glucose1phosphate for muscle cells glycogen contained within skeletal muscle cells are primarily in the form of β particles other cells that contain small amounts use it locally as well as muscle cells lack glucose6phosphatase which is required to pass glucose into the blood the glycogen they store is available solely for internal use and is not shared with other cells this is in contrast to liver cells which on demand readily do break down their stored glycogen into glucose and send it through the blood stream as fuel for other organsskeletal muscle needs atp provides energy for muscle contraction and relaxation in what is known as the sliding filament theory skeletal muscle relies predominantly on glycogenolysis for the first few minutes as it transitions from rest to activity as well as throughout highintensity aerobic activity and all anaerobic activity during anaerobic activity such as weightlifting and isometric exercise the phosphagen system atppcr and muscle glycogen are the only substrates used as they do not require oxygen nor blood flowdifferent bioenergetic systems produce atp at different speeds with atp produced'</li><li>'glycogen storage disease type i gsd i is an inherited disease that prevents the liver from properly breaking down stored glycogen which is necessary to maintain adequate blood sugar levels gsd i is divided into two main types gsd ia and gsd ib which differ in cause presentation and treatment there are also possibly rarer subtypes the translocases for inorganic phosphate gsd ic or glucose gsd id however a recent study suggests that the biochemical assays used to differentiate gsd ic and gsd id from gsd ib are not reliable and are therefore gsd ibgsd ia is caused by a deficiency in the enzyme glucose6phosphatase gsd ib a deficiency in the transport protein glucose6phosphate translocase because glycogenolysis is the principal metabolic mechanism by which the liver supplies glucose to the body during fasting both deficiencies cause severe hypoglycemia and over time excess glycogen storage in the liver and in some cases in the kidneys because of the glycogen buildup gsd i patients typically present with enlarged livers from nonalcoholic fatty liver disease other 
functions of the liver and kidneys are initially intact in gsd i but are susceptible to other problems without proper treatment gsd i causes chronic low blood sugar which can lead to excessive lactic acid and abnormally high lipids in the blood and other problems frequent feedings of cornstarch or other carbohydrates are the principal treatment for all forms of gsd i gsd ib also features chronic neutropenia due to a dysfunction in the production of neutrophils in the bone marrow this immunodeficiency if untreated makes gsd ib patients susceptible to infection the principal treatment for this feature of gsd ib is filgrastim however patients often still require treatment for frequent infections and a chronically enlarged spleen is a common side effect gsd ib patients often present with inflammatory bowel diseaseit is the most common of the glycogen storage diseases gsd i has an incidence of approximately 1 in 100000 births in the american population and approximately 1 in 20000 births among ashkenazi jews the disease was named after german doctor edgar von gierke who first described it in 1929 early research into gsd i identified numerous clinical manifestations falsely thought to be primary features of the genetic disorder however continuing research has revealed that these clinical features are the consequences of only one in gsd ia or two in gsd ib'</li><li>'##patic arteries and threaded through the gastroduodenal mostly or celiac artery the catheter is fixed in this position and the pump is placed in a subcutaneous pocket finally to confirm adequate placement and hepatic perfusion and to rule out extrahepatic perfusion a dye fluorescein or methylene blue is injected into the pump after the procedure and before starting the hai based treatment a technetium 99mlabeled macroaggregated albumin scan is performed to again confirm adequate hepatic perfusion and no misperfusion outside of the liver the complications of hai therapy can be divided into those related to the surgical placement of the pump technical catheterrelated complications and those related to the chemotherapeutic agents usedrelating to the surgical hai pump placement early postoperative complications consist of arterial injury leading to hepatic artery thrombosis inadequate perfusion of the entire liver due to the inability to identify an accessory hepatic artery extrahepatic perfusion to the stomach or duodenum or hematoma formation in the subcutaneous pump pocket late complications are more common and include inflammation or ulceration of the stomach or duodenum and pump pocket infectionthe most common catheter related complications include displacement of the catheter occlusion of the hepatic artery because of the catheter and catheter thrombosis these catheter related complications dont occur as frequently with increased surgical experience and with improvements in pump designthe most common toxicities caused by the chemotherapeutic agents were gastrointestinal symptoms chemical hepatitis and bone marrow inhibition it is important to note that the most serious and dose limiting complication of hai is hepatobiliary toxicity this occurs more commonly with fudr than any other chemotherapeutic agent patients undergoing hai therapy therefore have regular liver function tests to monitor any damage to the liver as previously mentioned studies have been carried out to come up with treatment algorithms to minimize this serious side effect it has been shown that adding leucovorin and fudr for infusion through the pump not only 
reduces the biliary toxicity of the drug but also increases the response rate however biliary sclerosis is not seen with hai using 5fu 5fu is associated with an increased risk of myelosuppression logically it would make sense to therefore consider alternating between hai fudr and hai 5fu'</li></ul> |
| 11 | <ul><li>'and arms within the cranium the two vertebral arteries fuse into the basilar artery posterior inferior cerebellar artery pica basilar artery supplies the midbrain cerebellum and usually branches into the posterior cerebral artery anterior inferior cerebellar artery aica pontine branches superior cerebellar artery sca posterior cerebral artery pca posterior communicating artery the venous drainage of the cerebrum can be separated into two subdivisions superficial and deep the superficial systemthe superficial system is composed of dural venous sinuses sinuses channels within the dura mater the dural sinuses are therefore located on the surface of the cerebrum the most prominent of these sinuses is the superior sagittal sinus which is located in the sagittal plane under the midline of the cerebral vault posteriorly and inferiorly to the confluence of sinuses where the superficial drainage joins with the sinus that primarily drains the deep venous system from here two transverse sinuses bifurcate and travel laterally and inferiorly in an sshaped curve that forms the sigmoid sinuses which go on to form the two jugular veins in the neck the jugular veins parallel the upward course of the carotid arteries and drain blood into the superior vena cava the veins puncture the relevant dural sinus piercing the arachnoid and dura mater as bridging veins that drain their contents into the sinus the deep venous systemthe deep venous system is primarily composed of traditional veins inside the deep structures of the brain which join behind the midbrain to form the great cerebral vein vein of galen this vein merges with the inferior sagittal sinus to form the straight sinus which then joins the superficial venous system mentioned above at the confluence of sinuses cerebral blood flow cbf is the blood supply to the brain in a given period of time in an adult cbf is typically 750 millilitres per minute or 15 of the cardiac output this equates to an average perfusion of 50 to 54 millilitres of blood per 100 grams of brain tissue per minute cbf is tightly regulated to meet the brains metabolic demands too much blood a clinical condition of a normal homeostatic response of hyperemia can raise intracranial pressure icp which can compress and damage delicate brain tissue too little blood flow ischemia results if blood flow to the brain is below 18 to 20 ml per 100 g per minute and tissue death occurs if flow dips below 8 to'</li><li>'##ie b infection it is mostly unnecessary for treatment purposes to diagnose which virus is causing the symptoms in question though it may be epidemiologically useful coxsackie b infections usually do not cause serious disease although for newborns in the first 1 – 2 weeks of life coxsackie b infections can easily be fatal the pancreas is a frequent target which can cause pancreatitiscoxsackie b3 cb3 infections are the most common enterovirus cause of myocarditis and sudden cardiac death cb3 infection causes ion channel pathology in the heart leading to ventricular arrhythmia studies in mice suggest that cb3 enters cells by means of tolllike receptor 4 both cb3 and cb4 exploit cellular autophagy to promote replication the b4 coxsackie viruses cb4 serotype was suggested to be a possible cause of diabetes mellitus type 1 t1d an autoimmune response to coxsackie virus b infection upon the islets of langerhans may be a cause of t1dother research implicates strains b1 a4 a2 and a16 in the destruction of beta cells with some suggestion that strains b3 and b6 may have 
protective effects via immunological crossprotection as of 2008 there is no wellaccepted treatment for the coxsackie b group of viruses palliative care is available however and patients with chest pain or stiffness of the neck should be examined for signs of cardiac or central nervous system involvement respectively some measure of prevention can usually be achieved by basic sanitation on the part of foodservice workers though the viruses are highly contagious care should be taken in washing ones hands and in cleaning the body after swimming in the event of coxsackieinduced myocarditis or pericarditis antiinflammatories can be given to reduce damage to the heart muscle enteroviruses are usually only capable of acute infections that are rapidly cleared by the adaptive immune response however mutations which enterovirus b serotypes such as coxsackievirus b and echovirus acquire in the host during the acute phase can transform these viruses into the noncytolytic form also known as noncytopathic or defective enterovirus this form is a mutated quasispecies of enterovirus which is capable of causing persistent infection in human tissues and such infections have been found in the pancreas in type 1 diabetes in chronic myocarditis and dilated cardiomyopathy in valvular'</li><li>'the biomedical research center brc is a research center at qatar university focusing on biomedical research brc was founded in 2014 and partners with the ministry of public health qatar and hamad medical corporation hmc the incidence of genetic disorders in qatar is high with the top three causes of death in the country cancer heart diseases and diabetes the government saw the creation of brc as a strategy for proactively preventing diseases to foster public healthbrc labs received the isoiec 17025 accreditation from the american association for laboratory accreditation a2la the centres research activities focus on the domains of infectious diseases virology and microbiology metabolic disorders and biomedical omics since its inauguration in 2014 brc researchers have published research papers with more than 530 publicationsthe centres research projects include antibiotic profiling of antibiotics resistant microbes in humans and animals one health approach identified for the first time the reason of why some obese people gets type2 diabetes while others do not conducted six research on covid19 to assist in fighting and recovery provided a study on protection against the omicron variant in qatar decoded the genetic code of qatari falcons and various endangered animal species dna sequence of the dugong sea cow study a nanomedicinebased preventative strategy to controlling diseases and improve health brc introduced the use of zebrafish as an animal model in biomedical research at qu and established a facility for it in 2015 the facility is used as a research unit to study many genetic diseases therefore ministry of public health qatar clearly articulated an institutional research policy irp on human use of zebrafish in research and qu circulated it to qu community for implementation the brc facilities include biosafety level 3 bsl3 built by certek usa it is equipped for viral and bacterial research on risk group 3 pathogens sequencing unit to conduct stateoftheart research in genomics mariam al maadeed sidra medical and research center'</li></ul> |
| 17 | <ul><li>'and rainfall there are many ways to date a core once dated it gives valuable information about changes of climate and terrain for example cores in the ocean floor soil and ice have altered the view of the geologic history of the pleistocene entirely reverse circulation drilling is a method in which rock cuttings are continuously extracted through the hollow drill rod and can be sampled for analysis the method may be faster and use less water than core drilling but does not produce cores of relatively undisturbed material so less information on the rock structure can be derived from analysis if compressed air is used for cutting extraction the sample remains uncontaminated is available almost immediately and the method has a low environmental impact core drill ice core integrated ocean drilling program scientific drilling'</li><li>'##cial environments tend to be found in higher latitudes since there is more land at these latitudes in the north most of this effect is seen in the northern hemisphere however in lower latitudes the direct effect of the suns radiation is greater so the freezethaw effect is seen but permafrost is much less widespread altitude – air temperature drops by approximately 1 °c for every 100 m rise above sea level this means that on mountain ranges modern periglacial conditions are found nearer the equator than they are lower down ocean currents – cold surface currents from polar regions reduce mean average temperatures in places where they exert their effect so that ice caps and periglacial conditions will show nearer to the equator as in labrador for example conversely warm surface currents from tropical seas increases mean temperatures the cold conditions are then found only in more northerly places this is apparent in western north america which is affected by the north pacific current in the same way but more markedly the gulf stream affects western europe continentality – away from the moderating influence of the ocean seasonal temperature variation is more extreme and freezethaw goes deeper in the centres of canada and siberia the permafrost typical of periglaciation goes deeper and extends further towards the equator similarly solifluction associated with freezethaw extends into somewhat lower latitudes than on western coasts periglaciation results in a variety of ground conditions but especially those involving irregular mixed deposits created by ice wedges solifluction gelifluction frost creep and rockfalls periglacial environments trend towards stable geomorphologies coombe and head deposits – coombe deposits are chalk deposits found below chalk escarpments in southern england head deposits are more common below outcrops of granite on dartmoor patterned ground – patterned ground occurs where stones form circles polygons and stripes local topography affects which of these are expressed a process called frost heaving is responsible for these features solifluction lobes – solifluction lobes are formed when waterlogged soil slips down a slope due to gravity forming u shaped lobes blockfields or felsenmeer – blockfields are areas covered by large angular blocks traditionally believed to have been created by freezethaw action a good example of a blockfield can be found in the snowdonia national park wales blockfields are common in the unglaciated parts of the appalachian mountains in the northeastern united states such as at the river of rocks or hickory run boulder field lehigh county pennsylvaniaother landforms include bratschen palsa periglacial 
lake pingo'</li><li>'climate was cooler during the overarching little ice age than it is today ice cores scientists have studied the chemical composition of ice cores long tubes of ice that are drilled from glaciers and ice sheets to learn of past climate conditions tree rings the width of tree rings can be used to reconstruct past climate conditions as trees grow more slowly in cooler temperatures tree ring data from the little ice age seems to prove a reduction in solar activityoverall the evidence suggests that the amount of solar radiation reaching the earths surface was slightly lower during the grindelwald fluctuation and this reduction in solar radiation is thought to have contributed to the expansion of the glaciers human activities such as deforestation and land use changes are known to negatively affect local climate patterns william ruddiman a palaeoclimatologist proposed the hypothesis that human activity has been affecting the earths climate for much longer than previously thought in particular ruddiman has argued that the early adoption of agriculture and landuse practices by human societies beginning around 8000 years ago led to the release of significant amounts of greenhouse gases into the atmosphere which may have contributed to the warming of the earths climateit is difficult to accurately assess the extent of depopulation that occurred during both the 1500s and 1600s as reliable population data from this period is limited however it is known that this period was one of significant upheaval and change with many regions experiencing significant population drops due to wars plagues famines and natural disasters the bubonic plague for instance killed between 75 and 200 million people in europe alone it is also believed that an onset of disease during the little ice age may have led to further depopulationthis decline in population meant that cultivated lands became unkempt allowing for the regrowth of wild plants this is perceived to be the cause for the drop in atmospheric carbon dioxide in the sixteenth century thus exacerbating the extreme cooling period however of the causes depopulation is the least significant in historical records the grindelwald fluctuation is characterised by a further drop in temperatures and more frequent cold spells throughout many parts of the world the more notable records written by a jacobean weather enthusiast in bristol chronicle some of the effects the weather fluctuation had on agriculture and society they specifically discuss food shortages and crop failures taking precedence throughout the area'</li></ul> |
| 14 | <ul><li>'needle aspiration fna biopsy can be fast and least painful a very thin hollow needle and slight suction will be used to remove a small sample from under the nipple using a local anesthetic to numb the skin may not be necessary since a thin needle is used for the biopsy receiving an injection to prevent pain from the biopsy may be more painful than the biopsy itselfsome men develop a condition known as gynecomastia in which the breast tissue under the nipple develops and grows discharge from the nipple can occur the nipple may swell in some men possibly due to increased levels of estrogen changes in appearance may be normal or related to disease inverted nipples – this is normal if the nipples have always been indented inward and can easily point out when touched if the nipples are pointing in and this is new this is an unexpected change skin puckering of the nipple – this can be caused by scar tissue from surgery or an infection often scar tissue forms for no reason most of the time this issue does not need treatment this is an unexpected change this change can be of concern since puckering or retraction of the nipple can indicate an underlying change in breast tissue that may be cancerous the nipple is warm to the touch red or painful – this can be an infection it is rarely due to breast cancer scaly flaking or itchy nipple – this is most often due to eczema or a bacterial or fungal infection this change is not expected flaking scaly or itchy nipples can be a sign of pagets disease thickened skin with large pores – this is called peau dorange because the skin looks like an orange peel an infection in the breast or inflammatory breast cancer can cause this problem this is not an expected change retracted nipples – the nipple was raised above the surface but changes begins to pull inward and does not come out when stimulatedthe average projection and size of human female nipples is slightly more than 3⁄8 inch 95 mm symptoms of breast cancer can often be seen first by changes of the nipple and areola although not all women have the same symptoms and some people do not have any signs or symptoms at all a person may find out they have breast cancer after a routine mammogram warning signs can include new lump in the nipple or breast or armpit thickening or swelling of part of the breast areola or nipple irritation or dimpling of breast skin redness or flaky skin in the nipple area or the breast pulling in of the nipple or pain in the nipple area nipple discharge other than breast milk including blood any change'</li><li>'the mother over the chorion frondosum this part of the endometrium is called the decidua basalis forms the decidual plate the decidual plate is tightly attached to the chorion frondosum and goes on to form the actual placenta endometrium on the opposite side to the decidua basalis is the decidua parietalis this fuses with the chorion laevae thus filling up the uterine cavityin the case of twins dichorionic placentation refers to the presence of two placentas in all dizygotic and some monozygotic twins monochorionic placentation occurs when monozygotic twins develop with only one placenta and bears a higher risk of complications during pregnancy abnormal placentation can lead to an early termination of pregnancy for example in preeclampsia as placentation often results during the evolution of live birth the more than 100 origins of live birth in lizards and snakes squamata have seen close to an equal number of independent origins of placentation this means that 
the occurrence of placentation in squamata is more frequent than in all other vertebrates combined making them ideal for research on the evolution of placentation and viviparity itself in most squamates two separate placentae form utilising separate embryonic tissue the chorioallantoic and yolksac placentae in species with more complex placentation we see regional specialisation for gas amino acid and lipid transport placentae form following implantation into uterine tissue as seen in mammals and formation is likely facilitated by a plasma membrane transformationmost reptiles exhibit strict epitheliochorial placentation eg pseudemoia entrecasteauxii however at least two examples of endotheliochorial placentation have been identified mabuya sp and trachylepis ivensi unlike eutherian mammals epitheliochorial placentation is not maintained by maternal tissue as embryos do not readily invade tissues outside of the uterus the placenta is an organ that has evolved multiple times independently evolved relatively recently in some lineages and exists in intermediate forms in living species for these reasons it is an outstanding model to study the evolution of complex organs in animals research into the genetic mechanisms that underpin the evolution of the placenta have been conducted in a diversity of animals including reptiles seahorses and mammalsthe genetic processes that support the evolution of the placenta can be best understood by separating those that result'</li><li>'the myometrium once these cells penetrate through the first few layers of cells of the decidua they lose their ability to proliferate and become invasive this departure from the cell cycle seems to be due to factors such as tgfβ and decorin although these invasive interstitial cytotrophoblasts can no longer divide they retain their ability to form syncytia multinucleated giant cells small syncytia are found in the placental bed and myometrium as a result of the fusion of interstitial cytotrophoblastsinterstitial cytotrophoblasts may also transform into endovascular cytotrophoblasts the primary function of the endovascular cytotrophoblast is to penetrate maternal spiral arteries and route the blood flow through the placenta for the growing embryo to use they arise from interstitial cytotrophoblasts from the process of phenocopying this changes the phenotype of these cells from epithelial to endothelial endovascular cytotrophoblasts like their interstitial predecessor are nonproliferating and invasive proper cytotrophoblast function is essential in the implantation of a blastocyst after hatching the embryonic pole of the blastocyst faces the uterine endometrium once they make contact the trophoblast begins to rapidly proliferate the cytotrophoblast secretes proteolytic enzymes to break down the extracellular matrix between the endometrial cells to allow fingerlike projections of trophoblast to penetrate through projections of cytotrophoblast and syncytiotrophoblast pull the embryo into the endometrium until it is fully covered by endometrial epithelium save for the coagulation plug the most common associated disorder is preeclampsia affecting approximately 7 of all births it is characterized by a failure of the cytotrophoblast to invade the uterus and its vasculature specifically the spiral arteries that the endovascular cytotrophoblast should invade the result of this is decreased blood flow to the fetus which may cause intrauterine growth restriction clinical symptoms of preeclampsia in the mother are most commonly high blood 
pressure proteinuria and edema conversely if there is too much invasion of uterine tissue by the trophoblast then'</li></ul> |
| 36 | <ul><li>'to some decision or course of action socrates great myth illustrates this motif most clearly when the soul is depicted as a charioteer and its horses being led around a heavenly circuit this is the occasion for the first appearance in platos dialogues of the prominent platonic doctrine that life is motion the soul being the principle or source of life is that which moves itself as opposed to inanimate objects that require an external source of motion to move them the view that life is selfmotion and that the soul is a selfmover is used by plato to guarantee the immortality of the soul making this a novel argument for the souls immortality not found in the phaedo plato relies further on the view that the soul is a mind in order to explain how its motions are possible plato combines the view that the soul is a selfmover with the view that the soul is a mind in order to explain how the soul can move things in the first place eg how it can move the body to which it is attached in life souls move things by means of their thoughts in thomas manns novella death in venice the narrators young love tadzio is associated with phaedrus in mary renaults 1953 novel the charioteer a text of phaedrus is passed among the characters gay men during world war ii and the image of the charioteer and his white and black horses recurs as the protagonist struggles to choose between consummated and unconsummated love in a key scene from the film adaptation of maurice students including maurice attend dean cornwalliss translation class in which two undergraduates orally translate into english the text based on phaedrus stephanus 251a 255a – e during which the dean instructs one to omit the reference to the unspeakable vice of the greeks the 2016 film knight of cups by terrence malick is inspired in part by phaedrus in robert m pirsigs fictionalized autobiographical novel zen and the art of motorcycle maintenance pirsig refers to his past self from before undergoing electroconvulsive therapy in the third person and using the name phaedrus intended to reflect his opposition to certain educational and philosophical ideas the character reappears in the followup lila an inquiry into morals in virginia woolfs 1922 novel jacobs room jacob reads phaedrus alone in his room after a visit to the enormous mind as woolf characterizes the british museum jowett translation at standardebooks greek text at perseus plato nichols j h tr and ed phaedrus cornell university press'</li><li>'other lacks so much the betterthe first two of young becker and pikes four phases of written rogerian argument are based on the first two of rapoports three principles of ethical debate the third of rapoports principles — increasing the perceived similarity between self and other — is a principle that young becker and pike considered to be equally as important as the other two but they said it should be an attitude assumed throughout the discourse and is not a phase of writingmaxine hairston in a section on rogerian or nonthreatening argument in her textbook a contemporary rhetoric advised that one shouldnt start writing with a detailed plan in mind but might start by making four lists the others concerns ones own key points anticipated problems and points of agreement or common ground she gave a different version of young becker and pikes four phases which she expanded to five and called elements of the nonthreatening argument a brief and objective statement of the issue a neutrally worded analysis of the others position a neutrally 
worded analysis of ones own position a statement of the common aspects goals and values that the positions share and a proposal for resolving the issue that shows how both sides may gain she said that the rogerian approach requires calm patience and effort and will work if one is more concerned about increasing understanding and communication than about scoring a triumph in a related article she noted the similarity between rogerian argument and john stuart mills wellknown phrase from on liberty he who knows only his own side of the case knows little of thatrobert keith millers textbook the informed argument first published in 1986 presented five phases adapted from an earlier textbook by richard coe millers phases were an introduction to the problem a summary of views that oppose the writers position a statement of understanding of the region of validity of the opposing views a statement of the writers position a statement of the situations in which the writers position has merit and a statement of the benefits of accepting the writers positionin 1992 rebecca stephens built on the vague and abstract rogerian principles of other rhetoricians to create a set of 23 concrete and detailed questions that she called a rogerianbased heuristic for rhetorical invention intended to help people think in a rogerian way while discovering ideas and arguments for example the first two of her 23 questions are what is the nature of the issue in general terms and she recommended that the answer should itself be stated as a question and whose lives are affected by the issue the last two questions are what would have to happen to eliminate the disagreement among the opposing groups and what are the chances that this will occur lisa'</li><li>'reestablishes equilibrium and health in the collective imaginary which are jeopardized by the repressive aspects of societythe state of political satire in a given society reflects the tolerance or intolerance that characterizes it and the state of civil liberties and human rights under totalitarian regimes any criticism of a political system and especially satire is suppressed a typical example is the soviet union where the dissidents such as aleksandr solzhenitsyn and andrei sakharov were under strong pressure from the government while satire of everyday life in the ussr was allowed the most prominent satirist being arkady raikin political satire existed in the form of anecdotes that made fun of soviet political leaders especially brezhnev famous for his narrowmindedness and love for awards and decorations satire is a diverse genre which is complex to classify and define with a wide range of satiric modes satirical literature can commonly be categorized as either horatian juvenalian or menippean horatian horatian satire named for the roman satirist horace 65 – 8 bce playfully criticizes some social vice through gentle mild and lighthearted humour horace quintus horatius flaccus wrote satires to gently ridicule the dominant opinions and philosophical beliefs of ancient rome and greece rather than writing in harsh or accusing tones he addressed issues with humor and clever mockery horatian satire follows this same pattern of gently ridiculing the absurdities and follies of human beingsit directs wit exaggeration and selfdeprecating humour toward what it identifies as folly rather than evil horatian satires sympathetic tone is common in modern society a horatian satirists goal is to heal the situation with smiles rather than by anger horatian satire is a gentle reminder to 
take life less seriously and evokes a wry smile juvenalian juvenalian satire named for the writings of the roman satirist juvenal late first century – early second century ad is more contemptuous and abrasive than the horatian juvenal disagreed with the opinions of the public figures and institutions of the republic and actively attacked them through his literature he utilized the satirical tools of exaggeration and parody to make his targets appear monstrous and incompetent juvenals satire follows this same pattern of abrasively ridiculing societal structures juvenal also unlike horace attacked public officials and governmental organizations through his satires regarding their opinions as not just wrong but evil following in this tradition juvenalia'</li></ul> |
| 27 | <ul><li>'rod is so small newtons third law of physics applies for any action there is a reaction when the electrons are pulled across the surface of the rod so too is the rod pulled in the opposite direction the first recorded success of a nanosubmarine was performed by a team of students led by dan peer from tel aviv university in israel this was a continuation to peers work at harvard on nanosubmarines and targeted drug delivery tests have proven successful in delivering drugs to heal mice with ulcerative colitis tests will continue and the team plans to experiment on the human body soon fantastic voyage novel and movie based on the nanosubmarine theme'</li><li>'electronbeaminduced deposition ebid is a process of decomposing gaseous molecules by an electron beam leading to deposition of nonvolatile fragments onto a nearby substrate the electron beam is usually provided by a scanning electron microscope which results in high spatial accuracy potentially below one nanometer and the possibility to produce freestanding threedimensional structures the focused electron beam of a scanning electron microscope sem or scanning transmission electron microscope stem is commonly used another method is ionbeaminduced deposition ibid where a focused ion beam is applied instead precursor materials are typically liquid or solid and gasified prior to deposition usually through vaporization or sublimation and introduced at accurately controlled rate into the highvacuum chamber of the electron microscope alternatively solid precursors can be sublimated by the electron beam itself when deposition occurs at a high temperature or involves corrosive gases a specially designed deposition chamber is used it is isolated from the microscope and the beam is introduced into it through a micrometresized orifice the small orifice size maintains differential pressure in the microscope vacuum and deposition chamber no vacuum such deposition mode has been used for ebid of diamondin the presence of the precursor gas the electron beam is scanned over the substrate resulting in deposition of material the scanning is usually computercontrolled the deposition rate depends on a variety of processing parameters such as the partial precursor pressure substrate temperature electron beam parameters applied current density etc it usually is in the order of 10 nms primary electron energies in sems or stems are usually between 10 and 300 kev where reactions induced by electron impact ie precursor dissociation have a relatively low cross section the majority of decomposition occurs via low energy electron impact either by low energy secondary electrons which cross the substratevacuum interface and contribute to the total current density or inelastically scattered backscattered electrons primary stem electrons can be focused into spots as small as 0045 nm while the smallest structures deposited so far by ebid are point deposits of 07 nm diameter deposits usually have a larger lateral size than the beam spot size the reason are the socalled proximity effects meaning that secondary backscattered and forward scattered if the beam dwells on already deposited material electrons contribute to the deposition as these electrons can leave the substrate up to several microns away from the point of impact of the electron beam depending on its energy material deposition is not necessarily confined to the irradiated spot to overcome this problem compensation algorithms can be applied which is typical for electron beam lithography as of 2008 
the range of materials deposited by ebid included al au amor'</li><li>'##onment this presents a challenge in maintaining protein arrays in a stable condition over extended periods of time in situ methods — invented and published by mingyue he and michael taussig in 2001 — involve onchip synthesis of proteins as and when required directly from the dna using cellfree protein expression systems since dna is a highly stable molecule it does not deteriorate over time and is therefore suited to longterm storage this approach is also advantageous in that it circumvents the laborious and often costly processes of separate protein purification and dna cloning since proteins are made and immobilised simultaneously in a single step on the chip surface examples of in situ techniques are pisa protein in situ array nappa nucleic acid programmable protein array and dapa dna array to protein array there are three types of protein microarrays that are currently used to study the biochemical activities of proteins analytical microarrays are also known as capture arrays in this technique a library of antibodies aptamers or affibodies is arrayed on the support surface these are used as capture molecules since each binds specifically to a particular protein the array is probed with a complex protein solution such as a cell lysate analysis of the resulting binding reactions using various detection systems can provide information about expression levels of particular proteins in the sample as well as measurements of binding affinities and specificities this type of microarray is especially useful in comparing protein expression in different solutions for instance the response of the cells to a particular factor can be identified by comparing the lysates of cells treated with specific substances or grown under certain conditions with the lysates of control cells another application is in the identification and profiling of diseased tissues reverse phase protein microarray rppa involve complex samples such as tissue lysates cells are isolated from various tissues of interest and are lysed the lysate is arrayed onto the microarray and probed with antibodies against the target protein of interest these antibodies are typically detected with chemiluminescent fluorescent or colorimetric assays reference peptides are printed on the slides to allow for protein quantification of the sample lysates rpas allow for the determination of the presence of altered proteins or other agents that may be the result of disease specifically posttranslational modifications which are typically altered as a result of disease can be detected using rpas functional protein microarrays also known as target protein arrays are constructed by immobilising large numbers of purified proteins and are used to'</li></ul> |
| 9 | <ul><li>'a circular chromosome is a chromosome in bacteria archaea mitochondria and chloroplasts in the form of a molecule of circular dna unlike the linear chromosome of most eukaryotes most prokaryote chromosomes contain a circular dna molecule – there are no free ends to the dna free ends would otherwise create significant challenges to cells with respect to dna replication and stability cells that do contain chromosomes with dna ends or telomeres most eukaryotes have acquired elaborate mechanisms to overcome these challenges however a circular chromosome can provide other challenges for cells after replication the two progeny circular chromosomes can sometimes remain interlinked or tangled and they must be resolved so that each cell inherits one complete copy of the chromosome during cell division the circular bacteria chromosome replication is best understood in the wellstudied bacteria escherichia coli and bacillus subtilis chromosome replication proceeds in three major stages initiation elongation and termination the initiation stage starts with the ordered assembly of initiator proteins at the origin region of the chromosome called oric these assembly stages are regulated to ensure that chromosome replication occurs only once in each cell cycle during the elongation phase of replication the enzymes that were assembled at oric during initiation proceed along each arm replichore of the chromosome in opposite directions away from the oric replicating the dna to create two identical copies this process is known as bidirectional replication the entire assembly of molecules involved in dna replication on each arm is called a replisome at the forefront of the replisome is a dna helicase that unwinds the two strands of dna creating a moving replication fork the two unwound single strands of dna serve as templates for dna polymerase which moves with the helicase together with other proteins to synthesise a complementary copy of each strand in this way two identical copies of the original dna are created eventually the two replication forks moving around the circular chromosome meet in a specific zone of the chromosome approximately opposite oric called the terminus region the elongation enzymes then disassemble and the two daughter chromosomes are resolved before cell division is completed the e coli origin of replication called oric consists of dna sequences that are recognised by the dnaa protein which is highly conserved amongst different bacterial species dnaa binding to the origin initiates the regulated recruitment of other enzymes and proteins that will eventually lead to the establishment of two complete replisomes for bidirectional replicationdna sequence elements within oric that are important for its function include dnaa boxes a 9mer repeat with a highly'</li><li>'the second step of this process has recently fallen into question for the past few decades the common view was that a trimeric multiheme ctype hao converts hydroxylamine into nitrite in the periplasm with production of four electrons 12 the stream of four electrons is channeled through cytochrome c554 to a membranebound cytochrome c552 two of the electrons are routed back to amo where they are used for the oxidation of ammonia quinol pool the remaining two electrons are used to generate a proton motive force and reduce nadp through reverse electron transportrecent results however show that hao does not produce nitrite as a direct product of catalysis this enzyme instead produces nitric oxide and three electrons 
nitric oxide can then be oxidized by other enzymes or oxygen to nitrite in this paradigm the electron balance for overall metabolism needs to be reconsidered nitrite produced in the first step of autotrophic nitrification is oxidized to nitrate by nitrite oxidoreductase nxr 2 it is a membraneassociated ironsulfur molybdo protein and is part of an electron transfer chain which channels electrons from nitrite to molecular oxygen the enzymatic mechanisms involved in nitriteoxidizing bacteria are less described than that of ammonium oxidation recent research eg woznica a et al 2013 proposes a new hypothetical model of nob electron transport chain and nxr mechanisms here in contrast to earlier models the nxr would act on the outside of the plasma membrane and directly contribute to a mechanism of proton gradient generation as postulated by spieck and coworkers nevertheless the molecular mechanism of nitrite oxidation is an open question the twostep conversion of ammonia to nitrate observed in ammoniaoxidizing bacteria ammoniaoxidizing archaea and nitriteoxidizing bacteria such as nitrobacter is puzzling to researchers complete nitrification the conversion of ammonia to nitrate in a single step known as comammox has an energy yield ∆g° ′ of −349 kj mol−1 nh3 while the energy yields for the ammoniaoxidation and nitriteoxidation steps of the observed twostep reaction are −275 kj mol−1 nh3 and −74 kj mol−1 no2− respectively these values indicate that it would be energetically favourable for an organism to carry out complete nitrification from ammonia to nitrate comammox rather'</li><li>'young animals and nonnative breeds the clinical signs of disease are caused by an increased vascular permeability and consequent oedema and hypovolemia the symptoms include neurological signs such as tremors and head pressing respiratory signs such as coughing and nasal discharge and systemic signs such as fever and loss of appetite physical examination may reveal petechiae of the mucous membranes tachycardia and muffled heart sounds heartwater can also cause reproductive and gastrointestinal disease it is frequently fatal on post mortem examination a light yellow transudate that coagulates on exposure to air is often found within the thorax pericardium and abdomen most fatal cases have the hydropericardium that gives the disease its common name pulmonary oedema and mucosal congestion are regularly seen along with frothy fluid in the airways and cut surfaces of the lungs to definitively diagnose the disease c ruminantium must be demonstrated either in preparations of the hippocampus under giemsa staining or by histopathology of brain or kidney during the early stages of disease animals may be treated with sulfonamides and tetracyclines in advanced disease prognosis is poor tetracyclines can also be used prophylactically when animals are introduced into an area endemic with heartwater ectoparasiticides used as dips can be used to reduce exposure the animals exposure to bont ticks in areas endemic for heartwater the use of dips against other ticks of domestic animals such as rhipicephalus boophilus and hyalomma species is likely and this will usually contribute to control of vectors of e ruminantium a live blood vaccine is available for protection of young stock but animals may require treatment for the disease after vaccination several experimental vaccines are currently being developed examples include attenuated recombinant and multiepitope dna vaccines depending on the species of the animal the mortality rate of 
the disease may vary from 5 to 90 mortality rates appear to be the highest within the various sheep and goat species but this is not always the case as some sheep species such as the afrikaner have mortality rates only reaching as high as 6 heartwater is notifiable to the world organization for animal health the us department of agriculture believes that an outbreak in the us could cost the livestock industry up to 762 million in losses annually the tick that carries the disease is thought to be capable of being transported by migratory birds from the caribbean to at least florida the'</li></ul> |
| 29 | <ul><li>'fixed circle of latitude or zonal region if the coriolis parameter is large the effect of the earths rotation on the body is significant since it will need a larger angular frequency to stay in equilibrium with the coriolis forces alternatively if the coriolis parameter is small the effect of the earths rotation is small since only a small fraction of the centripetal force on the body is canceled by the coriolis force thus the magnitude of f displaystyle f strongly affects the relevant dynamics contributing to the bodys motion these considerations are captured in the nondimensionalized rossby number in stability calculations the rate of change of f displaystyle f along the meridional direction becomes significant this is called the rossby parameter and is usually denoted β ∂ f ∂ y displaystyle beta frac partial fpartial y where y displaystyle y is the in the local direction of increasing meridian this parameter becomes important for example in calculations involving rossby waves beta plane earths rotation rossbygravity waves'</li><li>'of silicic acid to nitrate because larger diatoms that require silicic acid to make their opal silica shells are less prevalent unlike the southern ocean and the north pacific the equatorial pacific experiences temporal silicate availability which leads to large seasonal diatom bloomsthe distribution of trace metals and relative abundance of macronutrients are reflected in the plankton community structure for example the selection of phytoplankton with a high surface area to volume ratio results in hnlc regions being dominated by nano and picoplankton this ratio allows for optimal utilization of available dissolved nutrients larger phytoplankton such as diatoms cannot energetically sustain themselves in these regions common picoplankton within these regions include genera such as prochlorococcus not generally found in the north pacific synechococcus and various eukaryotes grazing protists likely control the abundance and distribution of these small phytoplanktonthe generally lower net primary production in hnlc zones results in lower biological drawdown of atmospheric carbon dioxide and thus these regions are generally considered a net source of carbon dioxide to the atmosphere hnlc areas are of interest to geoengineers and some in the scientific community who believe fertilizing large patches of these waters with iron could potentially lower dissolved carbon dioxide and offset increased anthropogenic carbon emissions analysis of antarctic ice core data over the last million years shows correlation between high levels of dust and low temperature indicating that addition of diffuse ironrich dust to the sea has been a natural amplifier of climate cooling the discovery and naming of the first hnlc region the north pacific was formalized in a seminal paper published in 1988 the study concluded that surface waters of the eastern north pacific are generally dominated by picoplankton despite the relative abundance of macronutrients in other words larger phytoplankton such as diatoms which thrive in nutrientrich waters were not found instead the surface waters were replete with smaller pico and nanoplankton based on laboratory nutrient experiments iron was hypothesized to be a key limiting micronutrientthe pacific ocean is the largest and oldest body of water on earth the north pacific is characterized by the general clockwise rotation of the north pacific gyre which is driven by trade winds spatial variations in tradewinds result in cooler air 
temperatures in the western north pacific and milder air temperatures in the eastern north pacific ie subarctic pacific iron is supplied to the north pacific by dust storms that occur in asia'</li><li>'atmospheric pressure 101325 pa whereas water has a density of 09998 – 0999863 gcm3 at the same temperature and pressure liquid water is densest essentially 100 gcm3 at 4 °c and begins to lose its density as the water molecules begin to form the hexagonal crystals of ice as the freezing point is reached this is due to hydrogen bonding dominating the intermolecular forces which results in a packing of molecules less compact in the solid density of ice increases slightly with decreasing temperature and has a value of 09340 gcm3 at −180 °c 93 kwhen water freezes it increases in volume about 9 for fresh water the effect of expansion during freezing can be dramatic and ice expansion is a basic cause of freezethaw weathering of rock in nature and damage to building foundations and roadways from frost heaving it is also a common cause of the flooding of houses when water pipes burst due to the pressure of expanding water when it freezes the result of this process is that ice in its most common form floats on liquid water which is an important feature in earths biosphere it has been argued that without this property natural bodies of water would freeze in some cases permanently from the bottom up resulting in a loss of bottomdependent animal and plant life in fresh and sea water sufficiently thin ice sheets allow light to pass through while protecting the underside from shortterm weather extremes such as wind chill this creates a sheltered environment for bacterial and algal colonies when sea water freezes the ice is riddled with brinefilled channels which sustain sympagic organisms such as bacteria algae copepods and annelids which in turn provide food for animals such as krill and specialised fish like the bald notothen fed upon in turn by larger animals such as emperor penguins and minke whaleswhen ice melts it absorbs as much energy as it would take to heat an equivalent mass of water by 80 °c during the melting process the temperature remains constant at 0 °c while melting any energy added breaks the hydrogen bonds between ice water molecules energy becomes available to increase the thermal energy temperature only after enough hydrogen bonds are broken that the ice can be considered liquid water the amount of energy consumed in breaking hydrogen bonds in the transition from ice to water is known as the heat of fusion as with water ice absorbs light at the red end of the spectrum preferentially as the result of an overtone of an oxygen – hydrogen o – h bond stretch compared with water this absorption is shifted toward slightly lower energies thus ice appears blue with'</li></ul> |
| 13 | <ul><li>'has offered artworks in the form of graphics downloadable to the home personal computer – for example by peter halley the thing has enabled a diverse group of artists critics curators and activists to use the internet in its early stages at its core the thing is a social network made up of individuals from diverse backgrounds with a wide range of expert knowledge from this social hub the thing has built an array of programs and initiatives in both technological and cultural networks during its first five years tt became widely recognized as one of the founding and leading online centers for new media culture its activities include hosting artists projects and mailing lists as well as publishing cultural criticism the thing has also organized many public events and symposia on such topics as the state of new media arts the preservation of online privacy artistic innovations in robotics and the possibilities of community empowerment through wireless technologies in 1997 thingnet communications llc an internet service provider isp was incorporated by wolfgang staehle gisela ehrenfried and max kossatz the isp was to provide a financial backbone for the thing inc a 501 c 3 non profit organization thingnet has hosted arts and activist groups and publications including ps1 contemporary art center artforum mabou mines willoughby sharp gallery zingmagazine journal of contemporary art rtmark and tenantnet among many others artists and projects associated with thingnet have included sawad brooks heath bunting cercle ramo nash vuk cosic ricardo dominguez ursula endlicher etoy gh hovagimyan jerome joy john klima jenny marketou mariko mori olivier mosset prema murty mark napier joseph nechvatal phil niblock daniel pflumm francesca da rimini beat streuli and beth stryker the thing amsterdam was founded by walter van der cruijsen the thing basel was founded by barbara strebel and rik gelles the thing berlin was founded by ulf schleth the thing cologne was founded by michael krome the thing dusseldorf was founded by jorg sasse the thing frankfurt was founded by andreas kallfelz the thing hamburg 1993 – 94 was founded by hansjoachim lenger the thing hamburg 2006 – 2009 was founded by the local art association the thing hamburg the thing london was founded by andreas ruethi the thing new york was founded by wolfgang staehle the thing stockholm was founded by magnus borg the thing vienna was founded by helmut mark and max kossatz the thing roma was founded by marco deseriis and giuseppe marano'</li><li>'of using locative media to better understand and connect in their environmentsyzygryd is a collaboration with three other arts organizations interpretive arson false profit labs ardent heavy industries to create a large scale interactive art piece to be unveiled at the 2010 burning man event the first five resident artists alphonzo solorzano gabriel dunne ryan alexander miles stemper and daniel massey moved into the space in july 2009 in 2010 three of these resident artists remained gabriel dunne ryan alexander and daniel massey in 2021 gray area partnered with the human rights foundation to launch the art in protest residency program the program s an opportunity for artists whose art is dedicated to promoting democracy and human rights globally to explore and expand their digital practices the gray area incubator is a peerdriven community of creators developing work at the intersection of art and technology membership is a 6month commitment though many have continued on much longer to develop 
their works in the incubator artists work in the disciplines of visual media arts creative code virtual augmented reality civic engagement digital activism social entrepreneurship data science sound audio and software hardware gray areas josette melchor was selected as one of the five innovators showcased on fords the edge of progress tourafter the 2016 oakland ghostship warehouse fire gray area raised approximately 13 million from over 12000 donors which it distributed to 390 applicants ranging from deceased victims next of kin displaced residents people injured in the fire as well as people who would not be acknowledged by traditional disaster relief organizations including chosen family within marginalized communities'</li><li>'nfts being used in the filmindustry include a collection of nftartworks for godzilla vs kong the release of both kevin smiths horrormovie killroy was here and the 2021 film zero contact as nfts in 2021 in april 2021 an nft was released for the score of the movie triumph composed by gregg leonard in november 2021 film director quentin tarantino released seven nfts based on uncut scenes of pulp fiction miramax subsequently filed a lawsuit claiming that their film rights were violated and that the original 1993 contract with tarantino gave them the right to mint nfts in relation to pulp fiction in august 2022 muse released album will of the people as 1000 nfts and it became the first album for which nft sales would qualify for the uk and australian chartsby february 2021 nfts accounted for us25 million of revenue generated through the sale of artwork and songs as nfts on february 28 2021 electronic dance musician 3lau sold a collection of 33 nfts for a total of us117 million to commemorate the threeyear anniversary of his ultraviolet album on march 3 2021 an nft was made to promote the kings of leon album when you see yourself other musicians who have used nfts include american rapper lil pump grimes visual artist shepard fairey in collaboration with record producer mike dean and rapper eminema paper presented at the 40th international conference on information systems in munich in 2019 suggested using nfts as tickets for different types of events this would enable organizers of the respective events or artists performing there to receive royalties on the resale of each ticket other associated files a number of internet memes have been associated with nfts which were minted and sold by their creators or by their subjects examples include doge an image of a shiba inu dog as well as charlie bit my finger nyan cat and disaster girl some virtual worlds often marketed as metaverses have incorporated nfts as a means of trading virtual items and virtual real estate some pornographic works have been sold as nfts though hostility from nft marketplaces towards pornographic material has presented significant drawbacks for creators by using nfts people engaged in this area of the entertainmentindustry are able to publish their works without thirdparty platforms being able to delete them the first credited political protest nft destruction of nazi monument symbolizing contemporary lithuania was a video filmed by professor stanislovas tomas on april 8 2019 and minted on march 29 2021 in the video tomas uses a sledgehammer to destroy a statesponsored'</li></ul> |
| 7 | <ul><li>'lot of solutions available for people with hearing impairments some examples of solutions would be blinking lights on different things like their phones alarms and things that are important to alert them cochlear implants are an option too cochlear implants are surgically placed devices that stimulate the cochlear nerve in order to help the person hear a cochlear implant is used instead of hearing aids in order to help when someone has difficulties understanding speech in a cultural context deaf culture refers to a tightknit cultural group of people whose primary language is signed and who practice social and cultural norms which are distinct from those of the surrounding hearing community this community does not automatically include all those who are clinically or legally deaf nor does it exclude every hearing person according to baker and padden it includes any person who identifies himherself as a member of the deaf community and other members accept that person as a part of the community an example being children of deaf adults with normal hearing ability it includes the set of social beliefs behaviors art literary traditions history values and shared institutions of communities that are influenced by deafness and which use sign languages as the main means of communication members of the deaf community tend to view deafness as a difference in human experience rather than a disability or diseasemany nondisabled people continue to assume that deaf people have no autonomy and fail to provide people with support beyond hearing aids which is something that must be addressed different nongovernmental organizations around the world have created programs towards closing the gap between deaf and nondisabled people in developing countries the quota international organization with headquarters in the united states provided immense educational support in the philippines where it started providing free education to deaf children in the leganes resource center for the deaf the sounds seekers british organization also provided support by offering audiology maintenance technology to better assist those who are deaf in hardtoreach places the nippon foundation also supports deaf students at gallaudet university and the national technical institute for the deaf through sponsoring international scholarships programs to encourage students to become future leaders in the deaf community the more aid these organizations give to the deaf people the more opportunities and resources disabled people must speak up about their struggles and goals that they aim to achieve when more people understand how to leverage their privilege for the marginalized groups in the community then we can build a more inclusive and tolerant environment for the generations that are yet to come the first known record of sign language in history comes from platos cratylus written in the fifth century bce in a dialogue on the correctness of names socrates says suppose'</li><li>'the ear canal external acoustic meatus external auditory meatus eam is a pathway running from the outer ear to the middle ear the adult human ear canal extends from the pinna to the eardrum and is about 25 centimetres 1 in in length and 07 centimetres 03 in in diameter the human ear canal is divided into two parts the elastic cartilage part forms the outer third of the canal its anterior and lower wall are cartilaginous whereas its superior and back wall are fibrous the cartilage is the continuation of the cartilage framework of pinna the 
cartilaginous portion of the ear canal contains small hairs and specialized sweat glands called apocrine glands which produce cerumen ear wax the bony part forms the inner two thirds the bony part is much shorter in children and is only a ring annulus tympanicus in the newborn the layer of epithelium encompassing the bony portion of the ear canal is much thinner and therefore more sensitive in comparison to the cartilaginous portion size and shape of the canal vary among individuals the canal is approximately 25 centimetres 1 in long and 07 centimetres 028 in in diameter it has a sigmoid form and runs from behind and above downward and forward on the crosssection it is of oval shape these are important factors to consider when fitting earplugs due to its relative exposure to the outside world the ear canal is susceptible to diseases and other disorders some disorders include atresia of the ear canal cerumen impaction bone exposure caused by the wearing away of skin in the canal auditory canal osteoma bony outgrowths of the temporal bone cholesteatoma contact dermatitis of the ear canal fungal infection otomycosis ear mites in animals ear myiasis an extremely rare infestation of maggots foreign body in ear granuloma a scar usually caused by tympanostomy tubes otitis externa swimmers ear bacteriacaused inflammation of the ear canal stenosis a gradual closing of the canal earwax also known as cerumen is a yellowish waxy substance secreted in the ear canals it plays an important role in the human ear canal assisting in cleaning and lubrication and also provides some protection from bacteria fungi and insects excess or impacted cerumen can press against the eardrum andor occlude the external auditory canal and impair hearing causing conductive hearing loss if left untreated cerumen impaction can also increase the risk of developing an infection within the ear canal list of specialized glands within the'</li><li>'##anometry and speech audiometry may be helpful testing is performed by an audiologist there is no proven or recommended treatment or cure for snhl management of hearing loss is usually by hearing strategies and hearing aids in cases of profound or total deafness a cochlear implant is a specialised hearing aid that may restore a functional level of hearing snhl is at least partially preventable by avoiding environmental noise ototoxic chemicals and drugs and head trauma and treating or inoculating against certain triggering diseases and conditions like meningitis since the inner ear is not directly accessible to instruments identification is by patient report of the symptoms and audiometric testing of those who present to their doctor with sensorineural hearing loss 90 report having diminished hearing 57 report having a plugged feeling in ear and 49 report having ringing in ear tinnitus about half report vestibular vertigo problemsfor a detailed exposition of symptoms useful for screening a selfassessment questionnaire was developed by the american academy of otolaryngology called the hearing handicap inventory for adults hhia it is a 25question survey of subjective symptoms sensorineural hearing loss may be genetic or acquired ie as a consequence of disease noise trauma etc people may have a hearing loss from birth congenital or the hearing loss may come on later many cases are related to old age agerelated hearing loss can be inherited more than 40 genes have been implicated in the cause of deafness there are 300 syndromes with related hearing loss and each syndrome may have causative 
genesrecessive dominant xlinked or mitochondrial genetic mutations can affect the structure or metabolism of the inner ear some may be single point mutations whereas others are due to chromosomal abnormalities some genetic causes give rise to a late onset hearing loss mitochondrial mutations can cause snhl ie m1555ag which makes the individual sensitive to the ototoxic effects of aminoglycoside antibiotics the most common cause of recessive genetic congenital hearing impairment in developed countries is dfnb1 also known as connexin 26 deafness or gjb2related deafness the most common syndromic forms of hearing impairment include dominant stickler syndrome and waardenburg syndrome and recessive pendred syndrome and usher syndrome mitochondrial mutations causing deafness are rare mttl1 mutations cause midd maternally inherited deafness and diabetes and other conditions which may include deafness as part of the picture tmprss3 gene was identified by its association with both congenital and childhood onset autosomal recessive deafness this gene is expressed in fetal co'</li></ul> |
| 23 | <ul><li>'tolerogenic dendritic cells a k a toldcs tdcs or dcregs are heterogenous pool of dendritic cells with immunosuppressive properties priming immune system into tolerogenic state against various antigens these tolerogenic effects are mostly mediated through regulation of t cells such as inducing t cell anergy t cell apoptosis and induction of tregs toldcs also affect local microenvironment toward tolerogenic state by producing antiinflammatory cytokines toldcs are not lineage specific and their immunesuppressive functions is due to their state of activation andor differentiation generally properties of all types of dendritic cells can be highly affected by local microenvironment such as presence of pro or antiinflammatory cytokines therefore tolerogenic properties of toldcs are often context dependant and can be even eventually overridden into proinflammatory phenotypetolerogenic dcs present a potential strategy for treatment of autoimmune diseases allergic diseases and transplant rejections moreover agspecific tolerance in humans can be induced in vivo via vaccination with agpulsed ex vivo generated tolerogenic dcs for that reason tolerogenic dcs are an important promising therapeutic tool dendritic cells dcs were first discovered and described in 1973 by ralph m steinman they represent a bridge between innate and adaptive immunity and play a key role in the regulation of initiation of immune responses dcs populate almost all body surfaces and they do not kill the pathogens directly they utilize and subsequently degrade antigens to peptides by their proteolytic activity after that they present these peptides in complexes together with their mhc molecules on their cell surface dcs are also the only cell type which can activate naive t cells and induce antigenspecific immune responsestherefore their role is crucially important in balance between tolerance and immune response tolerogenic dcs are essential in maintenance of central and peripheral tolerance through induction of t cell clonal deletion t cell anergy and generation and activation of regulatory t treg cells for that reason tolerogenic dcs are possible candidates for specific cellular therapy for treatment of allergic diseases autoimmune diseases eg type 1 diabetes multiple sclerosis rheumatoid arthritis or transplant rejectionstolerogenic dcs often display an immature or semimature phenotype with characteristically low expression of costimulatory eg cd80 cd86 and mhc molecules'</li><li>'distribution of il2 receptors cd25 cd122 cd132 on different cell populations resulting in different cells that are activated by high and low dose il2 in general high doses are immune suppressive while low doses can stimulate type 1 immunity lowdose il2 has been reported to reduce hepatitis c and b infectionil2 has been used in clinical trials for the treatment of chronic viral infections and as a booster adjuvant for vaccines the use of large doses of il2 given every 6 – 8 weeks in hiv therapy similar to its use in cancer therapy was found to be ineffective in preventing progression to an aids diagnosis in two large clinical trials published in 2009more recently low dose il2 has shown early success in modulating the immune system in disease like type 1 diabetes and vasculitis there are also promising studies looking to use low dose il2 in ischaemic heart disease il2 cannot accomplish its role as a promising immunotherapeutic agent due to significant drawbacks which are listed above some of the issues can be overcome using il2 ic they 
are composed of il2 and some of its monoclonal antibody mab and can potentiate biologic activity of il2 in vivo the main mechanism of this phenomenon in vivo is due to the prolongation of the cytokine halflife in circulation depending on the clone of il2 mab il2 ic can selectively stimulate either cd25high il2jes61 complexes or cd122high cells il2s4b6 il2s4b6 immune complexes have high stimulatory activity for nk cells and memory cd8 t cells and they could thus replace the conventional il2 in cancer immunotherapy on the other hand il2jes61 highly selectively stimulate regulatory t cells and they could be potentially useful for transplantations and in treatment of autoimmune diseases according to an immunology textbook il2 is particularly important historically as it is the first type i cytokine that was cloned the first type i cytokine for which a receptor component was cloned and was the first shortchain type i cytokine whose receptor structure was solved many general principles have been derived from studies of this cytokine including its being the first cytokine demonstrated to act in a growth factor – like fashion through specific highaffinity receptors analogous to the growth factors being studied by endocrinologists and biochemists 712 in the mid1960s studies reported activities in leukocyteconditioned media'</li><li>'the immune system during puberty and postpuberty than during the rest of a males adult life physical changes during puberty such as thymic involution also affect immunological response ecoimmunology or ecological immunology explores the relationship between the immune system of an organism and its social biotic and abiotic environment more recent ecoimmunological research has focused on host pathogen defences traditionally considered nonimmunological such as pathogen avoidance selfmedication symbiontmediated defenses and fecundity tradeoffs behavioural immunity a phrase coined by mark schaller specifically refers to psychological pathogen avoidance drivers such as disgust aroused by stimuli encountered around pathogeninfected individuals such as the smell of vomit more broadly behavioural ecological immunity has been demonstrated in multiple species for example the monarch butterfly often lays its eggs on certain toxic milkweed species when infected with parasites these toxins reduce parasite growth in the offspring of the infected monarch however when uninfected monarch butterflies are forced to feed only on these toxic plants they suffer a fitness cost as reduced lifespan relative to other uninfected monarch butterflies this indicates that laying eggs on toxic plants is a costly behaviour in monarchs which has probably evolved to reduce the severity of parasite infectionsymbiontmediated defenses are also heritable across host generations despite a nongenetic direct basis for the transmission aphids for example rely on several different symbionts for defense from key parasites and can vertically transmit their symbionts from parent to offspring therefore a symbiont that successfully confers protection from a parasite is more likely to be passed to the host offspring allowing coevolution with parasites attacking the host in a way similar to traditional immunity the preserved immune tissues of extinct species such as the thylacine thylacine cynocephalus can also provide insights into their biology the study of the interaction of the immune system with cancer cells can lead to diagnostic tests and therapies with which to find and fight cancer the immunology concerned with 
physiological reaction characteristic of the immune state this area of the immunology is devoted to the study of immunological aspects of the reproductive process including fetus acceptance the term has also been used by fertility clinics to address fertility problems recurrent miscarriages premature deliveries and dangerous complications such as preeclampsia list of immunologists immunomics international reviews of immunology outline of immunology history of immunology osteoimmunology'</li></ul> |
| 25 | <ul><li>'then convergence to i − a − 1 b displaystyle ia1b occurs if the magnitudes of all eigenvalues of a displaystyle a are less than 1 every bounded sequence in r n displaystyle mathbb r n has a convergent subsequence by the bolzano – weierstrass theorem if these all have the same limit then the original sequence converges to that limit if it can be shown that all of the subsequences of f displaystyle f have the same limit such as by showing that there is a unique fixed point of the transformation t displaystyle t then the initial sequence must also converge to that limit every bounded monotonic sequence in r n displaystyle mathbb r n converges to a limit this approach can also be applied to sequences that are not monotonic instead it is possible to define a function v r n → r displaystyle vmathbb r nrightarrow mathbb r such that v f n displaystyle vfn is monotonic in n displaystyle n if the v displaystyle v satisfies the conditions to be a lyapunov function then f displaystyle f is convergent lyapunovs theorem is normally stated for ordinary differential equations but can also be applied to sequences of iterates by replacing derivatives with discrete differences the basic requirements on v displaystyle v are that v f n 1 − v f n 0 displaystyle vfn1vfn0 for f n = 0 displaystyle fnneq 0 and v 0 0 displaystyle v00 or v [UNK] x 0 displaystyle dot vx0 for x = 0 displaystyle xneq 0 v x 0 displaystyle vx0 for all x = 0 displaystyle xneq 0 and v 0 0 displaystyle v00 v displaystyle v be radially unbounded so that v x displaystyle vx goes to infinity for any sequence with ‖ x ‖ displaystyle x that tends to infinityin many cases a lyapunov function of the form v x x t a x displaystyle vxxtax can be found although more complex forms are also used for delay differential equations a similar approach applies with lyapunov functions replaced by lyapunov functionals also called lyapunovkrasovskii functionals if the inequality in the condition 1 is weak lasalles invariance principle may be used to consider the convergence of sequences of functions it is necessary to define a distance between functions to replace the euclidean norm these often include convergence in the'</li><li>'this is a list of convexity topics by wikipedia page alpha blending the process of combining a translucent foreground color with a background color thereby producing a new blended color this is a convex combination of two colors allowing for transparency effects in computer graphics barycentric coordinates a coordinate system in which the location of a point of a simplex a triangle tetrahedron etc is specified as the center of mass or barycenter of masses placed at its vertices the coordinates are nonnegative for points in the convex hull borsuks conjecture a conjecture about the number of pieces required to cover a body with a larger diameter solved by hadwiger for the case of smooth convex bodies bond convexity a measure of the nonlinear relationship between price and yield duration of a bond to changes in interest rates the second derivative of the price of the bond with respect to interest rates a basic form of convexity in finance caratheodorys theorem convex hull if a point x of rd lies in the convex hull of a set p there is a subset of p with d1 or fewer points such that x lies in its convex hull choquet theory an area of functional analysis and convex analysis concerned with measures with support on the extreme points of a convex set c roughly speaking all vectors of c should appear as averages of extreme points 
complex convexity — extends the notion of convexity to complex numbers convex analysis the branch of mathematics devoted to the study of properties of convex functions and convex sets often with applications in convex minimization convex combination a linear combination of points where all coefficients are nonnegative and sum to 1 all convex combinations are within the convex hull of the given points convex and concave a print by escher in which many of the structures features can be seen as both convex shapes and concave impressions convex body a compact convex set in a euclidean space whose interior is nonempty convex conjugate a dual of a real functional in a vector space can be interpreted as an encoding of the convex hull of the functions epigraph in terms of its supporting hyperplanes convex curve a plane curve that lies entirely on one side of each of its supporting lines the interior of a closed convex curve is a convex set convex function a function in which the line segment between any two points on the graph of the function lies above the graph closed convex function a convex function all of whose sublevel sets are closed sets proper convex function a convex function whose effective domain is nonempty and it never attains minus infinity concave function the negative of a convex function convex geometry the branch of geometry studying'</li><li>'##regularization is useful as it can often be used in a way such that the various symmetries of the physical system are preserved zetafunction regularization is used in conformal field theory renormalization and in fixing the critical spacetime dimension of string theory zeta function regularization is equivalent to dimensional regularization see4 however the main advantage of the zeta regularization is that it can be used whenever the dimensional regularization fails for example if there are matrices or tensors inside the calculations [UNK] i j k displaystyle epsilon ijk zetafunction regularization gives an analytic structure to any sums over an arithmetic function fn such sums are known as dirichlet series the regularized form f s [UNK] n 1 ∞ f n n − s displaystyle tilde fssum n1infty fnns converts divergences of the sum into simple poles on the complex splane in numerical calculations the zetafunction regularization is inappropriate as it is extremely slow to converge for numerical purposes a more rapidly converging sum is the exponential regularization given by f t [UNK] n 1 ∞ f n e − t n displaystyle ftsum n1infty fnetn this is sometimes called the ztransform of f where z exp−t the analytic structure of the exponential and zetaregularizations are related by expanding the exponential sum as a laurent series f t a n t n a n − 1 t n − 1 [UNK] displaystyle ftfrac antnfrac an1tn1cdots one finds that the zetaseries has the structure f s a n s − n [UNK] displaystyle tilde fsfrac ansncdots the structure of the exponential and zetaregulators are related by means of the mellin transform the one may be converted to the other by making use of the integral representation of the gamma function γ s [UNK] 0 ∞ t s − 1 e − t d t displaystyle gamma sint 0infty ts1etdt which leads to the identity γ s f s [UNK] 0 ∞ t s − 1 f t d t displaystyle gamma stilde fsint 0infty ts1ftdt relating the exponential and zetaregulators and converting poles in the splane to divergent terms in the laurent series the sum f s [UNK] n a n e − s ω n displaystyle fssum nanesomega n is sometimes called a heat kernel or a heatkernel regularized sum this name stems from the idea 
that the ω n'</li></ul> |
| 37 | <ul><li>'##dicative adjective must also be connected by a copula some theories of syntax adopt a subjectpredicate distinction for instance a textbook phrase structure grammar typically divides an english declarative sentence s into a noun phrase np and verb phrase vp the subject np is shown in green and the predicate vp in blue languages with more flexible word order often called nonconfigurational languages are often also treated differently in phrase structure approaches on the other hand dependency grammar rejects the binary subjectpredicate division and places the finite verb as the root of the sentence the matrix predicate is marked in blue and its two arguments are in green while the predicate cannot be construed as a constituent in the formal sense it is a catena barring a discontinuity predicates and their arguments are always catenae in dependency structures some theories of grammar accept both a binary division of sentences into subject and predicate while also giving the head of the predicate a special status in such contexts the term predicator is used to refer to that head there are cases in which the semantic predicand has a syntactic function other than subject this happens in raising constructions such as the following here you is the object of the make verb phrase the head of the main clause but it is also the predicand of the subordinate think clause which has no subject 329 – 335 the term predicate is also used to refer to properties and to words or phrases which denote them this usage of the term comes from the concept of a predicate in logic in logic predicates are symbols which are interpreted as relations or functions over arguments in semantics the denotations of some linguistic expressions are analyzed along similar lines expressions which denote predicates in the semantic sense are sometimes themselves referred to as predication the seminal work of greg carlson distinguishes between types of predicates based on carlsons work predicates have been divided into the following subclasses which roughly pertain to how a predicate relates to its subject stagelevel predicates a stagelevel predicate is true of a temporal stage of its subject for example if john is hungry then he typically will eat some food his state of being hungry therefore lasts a certain amount of time and not his entire lifespan stagelevel predicates can occur in a wide range of grammatical constructions and are probably the most versatile kind of predicate individuallevel predicates an individuallevel predicate is true throughout the existence of an individual for example if john is smart this is a property that he has regardless of which particular point'</li><li>'that there can be exactly the same relation between two completely different objects greek philosophers such as plato and aristotle used a wider notion of analogy they saw analogy as a shared abstraction analogous objects did not share necessarily a relation but also an idea a pattern a regularity an attribute an effect or a philosophy these authors also accepted that comparisons metaphors and images allegories could be used as arguments and sometimes they called them analogies analogies should also make those abstractions easier to understand and give confidence to those who use them james francis ross in portraying analogy 1982 the first substantive examination of the topic since cajetans de nominum analogia demonstrated that analogy is a systematic and universal feature of natural languages with identifiable and lawlike 
characteristics which explain how the meanings of words in a sentence are interdependent on the contrary ibn taymiyya francis bacon and later john stuart mill argued that analogy is simply a special case of induction in their view analogy is an inductive inference from common known attributes to another probable common attribute which is known about only in the source of the analogy in the following form premises a is c d e f g b is c d e f conclusion b is probably g contemporary cognitive scientists use a wide notion of analogy extensionally close to that of plato and aristotle but framed by gentners 1983 structure mapping theory the same idea of mapping between source and target is used by conceptual metaphor and conceptual blending theorists structure mapping theory concerns both psychology and computer science according to this view analogy depends on the mapping or alignment of the elements of source and target the mapping takes place not only between objects but also between relations of objects and between relations of relations the whole mapping yields the assignment of a predicate or a relation to the target structure mapping theory has been applied and has found considerable confirmation in psychology it has had reasonable success in computer science and artificial intelligence see below some studies extended the approach to specific subjects such as metaphor and similarity logicians analyze how analogical reasoning is used in arguments from analogy an analogy can be stated using is to and as when representing the analogous relationship between two pairs of expressions for example smile is to mouth as wink is to eye in the field of mathematics and logic this can be formalized with colon notation to represent the relationships using single colon for ratio and double colon for equalityin the field of testing the colon notation of ratios and equality is often borrowed so that the example above might be rendered smile mouth wink eye and pronounced the same way an analogy can be the linguistic process that reduces word forms thought to break rules to more common forms that follow these rules for example'</li><li>'this approach can be used to cover a wide variety of semantic phenomena a lambek grammar is an elaboration of this idea that has a concatenation operator for types and several other inference rules mati pentus has shown that these still have the generative capacity of contextfree grammars for the lambek calculus there is a type concatenation operator [UNK] displaystyle star so that prim ⊆ tp prim displaystyle textprimsubseteq texttptextprim and if x y ∈ tp prim displaystyle xyin texttptextprim then x y x [UNK] y x [UNK] y ∈ tp prim displaystyle xyxbackslash yxstar yin texttptextprim the lambek calculus consists of several deduction rules which specify how type inclusion assertions can be derived in the following rules upper case roman letters stand for types upper case greek letters stand for sequences of types a sequent of the form x ← γ displaystyle xleftarrow gamma can be read a string is of type x if it consists of the concatenation of strings of each of the types in γ if a type is interpreted as a set of strings then the ← may be interpreted as [UNK] that is includes as a subset a horizontal line means that the inclusion above the line implies the one below the line the process is begun by the axiom rule which has no antecedents and just says that any type includes itself axiom x ← x displaystyle textaxiomquad over xleftarrow x the cut rule says that inclusions can be 
composed cut z ← δ x δ ′ x ← γ z ← δ γ δ ′ displaystyle textcutquad zleftarrow delta xdelta qquad xleftarrow gamma over zleftarrow delta gamma delta the other rules come in pairs one pair for each type construction operator each pair consisting of one rule for the operator in the target one in the source of the arrow the name of a rule consists of the operator and an arrow with the operator on the side of the arrow on which it occurs in the conclusion for an example here is a derivation of type raising which says that b a [UNK] b ← a displaystyle babackslash bleftarrow a the names of rules and the substitutions used are to the right b ← b a ← a b ← b a a b a [UNK] b ← a axioms ← z y b x a γ a δ δ ′ [UNK] ← y b x b a γ a displaystyle dfra'</li></ul> |
| 30 | <ul><li>'on february 5 2005 for its operations of a vermiculite mine in libby montana the indictment accused grace of wire fraud knowing endangerment of residents by concealing air monitoring results obstruction of justice by interfering with an environmental protection agency epa investigation violation of the clean air act providing asbestos materials to schools and local residents and conspiracy to release asbestos and cover up health problems from asbestos contamination the department of justice said 1200 residents had developed asbestosrelated diseases and some had died and there could be many more injuries and deathson june 8 2006 a federal judge dismissed the conspiracy charge of knowing endangerment because some of the defendant officials had left the company before the fiveyear statute of limitations had begun to run the wire fraud charge was dropped by prosecutors in march other prosecutions on april 2 1998 three men were indicted in a conspiracy to use homeless men for illegal asbestos removal from an aging wisconsin manufacturing plant thenus attorney general janet reno said knowingly removing asbestos improperly is criminal exploiting the homeless to do this work is cruelon december 12 2004 owners of new york asbestos abatement companies were sentenced to the longest federal jail sentences for environmental crimes in us history after they were convicted on 18 counts of conspiracy to violate the clean air act and the toxic substances control act and actual violations of the clean air act and racketeerinfluenced and corrupt organizations act the crimes involved a 10year scheme to illegally remove asbestos the rico counts included obstruction of justice money laundering mail fraud and bid rigging all related to the asbestos cleanupon january 11 2006 san diego gas electric co two of its employees and a contractor were indicted by a federal grand jury on charges that they violated safety standards while removing asbestos from pipes in lemon grove california the defendants were charged with five counts of conspiracy violating asbestos work practice standards and making false statements'</li><li>'is standard in medicalbilling terminology especially when billing for a growth whose pathology has yet to be determined epidemiology of cancer list of biological development disorders pleomorphism somatic evolution in cancer'</li><li>'atm these epigenetic defects occurred in various cancers including breast ovarian colorectal and head and neck cancers two or three deficiencies in expression of ercc1 xpf or pms2 occur simultaneously in the majority of the 49 colon cancers evaluated by facista et al epigenetic alterations causing reduced expression of dna repair genes is shown in a central box at the third level from the top of the figure in this section and the consequent dna repair deficiency is shown at the fourth level when expression of dna repair genes is reduced dna damages accumulate in cells at a higher than normal level and these excess damages cause increased frequencies of mutation or epimutation mutation rates strongly increase in cells defective in dna mismatch repair or in homologous recombinational repair hrrduring repair of dna double strand breaks or repair of other dna damages incompletely cleared sites of repair can cause epigenetic gene silencing dna repair deficiencies level 4 in the figure cause increased dna damages level 5 in the figure which result in increased somatic mutations and epigenetic alterations level 6 in the figure field defects normalappearing 
tissue with multiple alterations and discussed in the section below are common precursors to development of the disordered and improperly proliferating clone of tissue in a malignant neoplasm such field defects second level from bottom of figure may have multiple mutations and epigenetic alterations once a cancer is formed it usually has genome instability this instability is likely due to reduced dna repair or excessive dna damage because of such instability the cancer continues to evolve and to produce sub clones for example a renal cancer sampled in 9 areas had 40 ubiquitous mutations demonstrating tumor heterogeneity ie present in all areas of the cancer 59 mutations shared by some but not all areas and 29 private mutations only present in one of the areas of the cancer various other terms have been used to describe this phenomenon including field effect field cancerization and field carcinogenesis the term field cancerization was first used in 1953 to describe an area or field of epithelium that has been preconditioned by at that time largely unknown processes so as to predispose it towards development of cancer since then the terms field cancerization and field defect have been used to describe premalignant tissue in which new cancers are likely to arisefield defects are important in progression to cancer however in most cancer research as pointed out by rubin the vast majority of studies in cancer research has been done on welldefined tumors in vivo or on discrete neoplastic foci in vitro'</li></ul> |
| 2 | <ul><li>'in algebra a resolvent cubic is one of several distinct although related cubic polynomials defined from a monic polynomial of degree four p x x 4 a 3 x 3 a 2 x 2 a 1 x a 0 displaystyle pxx4a3x3a2x2a1xa0 in each case the coefficients of the resolvent cubic can be obtained from the coefficients of px using only sums subtractions and multiplications knowing the roots of the resolvent cubic of px is useful for finding the roots of px itself hence the name “ resolvent cubic ” the polynomial px has a multiple root if and only if its resolvent cubic has a multiple root suppose that the coefficients of px belong to a field k whose characteristic is different from 2 in other words we are working in a field in which 1 1 = 0 whenever roots of px are mentioned they belong to some extension k of k such that px factors into linear factors in kx if k is the field q of rational numbers then k can be the field c of complex numbers or the field q of algebraic numbers in some cases the concept of resolvent cubic is defined only when px is a quartic in depressed form — that is when a3 0 note that the fourth and fifth definitions below also make sense and that the relationship between these resolvent cubics and px are still valid if the characteristic of k is equal to 2 suppose that px is a depressed quartic — that is that a3 0 a possible definition of the resolvent cubic of px is r 1 y 8 y 3 8 a 2 y 2 2 a 2 2 − 8 a 0 y − a 1 2 displaystyle r1y8y38a2y22a228a0ya12 the origin of this definition lies in applying ferraris method to find the roots of px to be more precise p x 0 [UNK] x 4 a 2 x 2 − a 1 x − a 0 [UNK] x 2 a 2 2 2 − a 1 x − a 0 a 2 2 4 displaystyle beginalignedpx0longleftrightarrow x4a2x2a1xa0longleftrightarrow leftx2frac a22right2a1xa0frac a224endaligned add a new unknown y to x2 a22 now you have x 2 a 2 2 y 2 − a 1 x − a 0 a 2 2 4 2 x 2 y a 2 y y 2 2 y x 2 − a 1 x − a'</li><li>'in particular in characteristic zero all complex solutions are sought searching for the real or rational solutions are much more difficult problems that are not considered in this article the set of solutions is not always finite for example the solutions of the system x x − 1 0 x y − 1 0 displaystyle beginalignedxx10xy10endaligned are a point xy 11 and a line x 0 even when the solution set is finite there is in general no closedform expression of the solutions in the case of a single equation this is abel – ruffini theorem the barth surface shown in the figure is the geometric representation of the solutions of a polynomial system reduced to a single equation of degree 6 in 3 variables some of its numerous singular points are visible on the image they are the solutions of a system of 4 equations of degree 5 in 3 variables such an overdetermined system has no solution in general that is if the coefficients are not specific if it has a finite number of solutions this number is at most 53 125 by bezouts theorem however it has been shown that for the case of the singular points of a surface of degree 6 the maximum number of solutions is 65 and is reached by the barth surface a system is overdetermined if the number of equations is higher than the number of variables a system is inconsistent if it has no complex solution or if the coefficients are not complex numbers no solution in an algebraically closed field containing the coefficients by hilberts nullstellensatz this means that 1 is a linear combination with polynomials as coefficients of the first members of the equations most but not all overdetermined systems 
when constructed with random coefficients are inconsistent for example the system x3 – 1 0 x2 – 1 0 is overdetermined having two equations but only one unknown but it is not inconsistent since it has the solution x 1 a system is underdetermined if the number of equations is lower than the number of the variables an underdetermined system is either inconsistent or has infinitely many complex solutions or solutions in an algebraically closed field that contains the coefficients of the equations this is a nontrivial result of commutative algebra that involves in particular hilberts nullstellensatz and krulls principal ideal theorem a system is zerodimensional if it has a finite number of complex solutions or solutions in an algebraically closed field this terminology comes from the fact that the algebraic variety of the solutions has dimension zero a system with infinitely many solutions is said to be positivedimensional a zerodimensional system with as'</li><li>'##gu endif endwhile return factors the correctness of this algorithm relies on the fact that the ring fqxf is a direct product of the fields fqxfi where fi runs on the irreducible factors of f as all these fields have qd elements the component of g in any of these fields is zero with probability q d − 1 2 q d [UNK] 1 2 displaystyle frac qd12qdsim tfrac 12 this implies that the polynomial gcdg u is the product of the factors of g for which the component of g is zero it has been shown that the average number of iterations of the while loop of the algorithm is less than 25 log 2 r displaystyle 25log 2r giving an average number of arithmetic operations in fq which is o d n 2 log r log q displaystyle odn2logrlogq in the typical case where dlogq n this complexity may be reduced to o n 2 log r log q n displaystyle on2logrlogqn by choosing h in the kernel of the linear map v → v q − v mod f displaystyle vto vqvpmod f and replacing the instruction g h q d − 1 2 − 1 mod f displaystyle ghfrac qd121pmod f by g h q − 1 2 − 1 mod f displaystyle ghfrac q121pmod f the proof of validity is the same as above replacing the direct product of the fields fqxfi by the direct product of their subfields with q elements the complexity is decomposed in o n 2 log r log q displaystyle on2logrlogq for the algorithm itself o n 2 log q n displaystyle on2logqn for the computation of the matrix of the linear map which may be already computed in the squarefree factorization and on3 for computing its kernel it may be noted that this algorithm works also if the factors have not the same degree in this case the number r of factors needed for stopping the while loop is found as the dimension of the kernel nevertheless the complexity is slightly better if squarefree factorization is done before using this algorithm as n may decrease with squarefree factorization this reduces the complexity of the critical steps victor shoups algorithm like the algorithms of the preceding section victor shoups algorithm is an equaldegree factorization algorithm unlike them it is a deterministic algorithm however it is less efficient in practice than the algorithms of preceding section for shoups algorithm the input is restricted'</li></ul> |
| 0 | <ul><li>'occupational noise is the amount of acoustic energy received by an employees auditory system when they are working in the industry occupational noise or industrial noise is often a term used in occupational safety and health as sustained exposure can cause permanent hearing damage occupational noise is considered an occupational hazard traditionally linked to loud industries such as shipbuilding mining railroad work welding and construction but can be present in any workplace where hazardous noise is present in the us the national institute for occupational safety and health niosh and the occupational safety and health administration osha work together to provide standards and regulations for noise in the workplacenational institute for occupational safety and health niosh occupational safety and health administration osha mine safety and health administration msha federal railroad administration fra have all set standards on hazardous occupational noise in their respective industries each industry is different as workers tasks and equipment differ but most regulations agree that noise becomes hazardous when it exceeds 85 decibels for an 8hour time exposure typical work shift this relationship between allotted noise level and exposure time is known as an exposure action value eav or permissible exposure limit pel the eav or pel can be seen as equations which manipulate the allotted exposure time according to the intensity of the industrial noise this equation works as an inverse exponential relationship as the industrial noise intensity increases the allotted exposure time to still remain safe decreases thus a worker exposed to a noise level of 100 decibels for 15 minutes would be at the same risk level as a worker exposed to 85 decibels for 8 hours using this mathematical relationship an employer can calculate whether or not their employees are being overexposed to noise when it is suspected that an employee will reach or exceed the pel a monitoring program for that employee should be implemented by the employerthe above calculations of pel and eav are based on measurements taken to determine the intensity of that particular industrial noise aweighted measurements are commonly used to determine noise levels that can cause harm to the human ear there are also special exposure meters available that integrate noise over a period of time to give an leq value equivalent sound pressure level defined by standards these numerical values do not fully reflect the real situation for example the osha standard sets the action level 85 dba and the pel 90 dba but in practice the compliance safety and health officer must record the excess of these values with a margin in order to take into account the potential measurement error and instead of pel 90 dba it turns out 92 dba and instead of al 85 dba its 87 dba occupational noise if experienced repeatedly at high intensity for an extended period of time can cause noiseinduce'</li><li>'the lowest frequency which can be localized depends on the ear distance animals with a greater ear distance can localize lower frequencies than humans can for animals with a smaller ear distance the lowest localizable frequency is higher than for humans if the ears are located at the side of the head interaural level differences appear for higher frequencies and can be evaluated for localization tasks for animals with ears at the top of the head no shadowing by the head will appear and therefore there will be much less interaural level differences which could 
be evaluated many of these animals can move their ears and these ear movements can be used as a lateral localization cue for many mammals there are also pronounced structures in the pinna near the entry of the ear canal as a consequence directiondependent resonances can appear which could be used as an additional localization cue similar to the localization in the median plane in the human auditory system there are additional localization cues which are also used by animals for sound localization in the median plane elevation of the sound also two detectors can be used which are positioned at different heights in animals however rough elevation information is gained simply by tilting the head provided that the sound lasts long enough to complete the movement this explains the innate behavior of cocking the head to one side when trying to localize a sound precisely to get instantaneous localization in more than two dimensions from timedifference or amplitudedifference cues requires more than two detectors the tiny parasitic fly ormia ochracea has become a model organism in sound localization experiments because of its unique ear the animal is too small for the time difference of sound arriving at the two ears to be calculated in the usual way yet it can determine the direction of sound sources with exquisite precision the tympanic membranes of opposite ears are directly connected mechanically allowing resolution of submicrosecond time differences and requiring a new neural coding strategy ho showed that the coupledeardrum system in frogs can produce increased interaural vibration disparities when only small arrival time and sound level differences were available to the animals head efforts to build directional microphones based on the coupledeardrum structure are underway most owls are nocturnal or crepuscular birds of prey because they hunt at night they must rely on nonvisual senses experiments by roger payne have shown that owls are sensitive to the sounds made by their prey not the heat or the smell in fact the sound cues are both necessary and sufficient for localization of mice from a distant location where they are perched for this to work the owls must be able to accurately localize both'</li><li>'##benmelodie in rock music from the late 1960s to the 2000s the timbre of specific sounds is important to a song for example in heavy metal music the sonic impact of the heavily amplified heavily distorted power chord played on electric guitar through very loud guitar amplifiers and rows of speaker cabinets is an essential part of the styles musical identity often listeners can identify an instrument even at different pitches and loudness in different environments and with different players in the case of the clarinet acoustic analysis shows waveforms irregular enough to suggest three instruments rather than one david luce suggests that this implies that certain strong regularities in the acoustic waveform of the above instruments must exist which are invariant with respect to the above variables however robert erickson argues that there are few regularities and they do not explain our powers of recognition and identification he suggests borrowing the concept of subjective constancy from studies of vision and visual perceptionpsychoacoustic experiments from the 1960s onwards tried to elucidate the nature of timbre one method involves playing pairs of sounds to listeners then using a multidimensional scaling algorithm to aggregate their dissimilarity judgments into a timbre space the most 
consistent outcomes from such experiments are that brightness or spectral energy distribution and the bite or rate and synchronicity and rise time of the attack are important factors the concept of tristimulus originates in the world of color describing the way three primary colors can be mixed together to create a given color by analogy the musical tristimulus measures the mixture of harmonics in a given sound grouped into three sections it is basically a proposal of reducing a huge number of sound partials that can amount to dozens or hundreds in some cases down to only three values the first tristimulus measures the relative weight of the first harmonic the second tristimulus measures the relative weight of the second third and fourth harmonics taken together and the third tristimulus measures the relative weight of all the remaining harmonics t 1 a 1 [UNK] h 1 h a h t 2 a 2 a 3 a 4 [UNK] h 1 h a h t 3 [UNK] h 5 h a h [UNK] h 1 h a h displaystyle t1frac a1sum h1hahqquad t2frac a2a3a4sum h1hahqquad t3frac sum h5hahsum h1hah however more evidence studies and applications would be needed regarding this type of representation in order to validate it the term brightness is also used in discussions of sound timbres in a rough analogy'</li></ul> |
| 39 | <ul><li>'waste heat is heat that is produced by a machine or other process that uses energy as a byproduct of doing work all such processes give off some waste heat as a fundamental result of the laws of thermodynamics waste heat has lower utility or in thermodynamics lexicon a lower exergy or higher entropy than the original energy source sources of waste heat include all manner of human activities natural systems and all organisms for example incandescent light bulbs get hot a refrigerator warms the room air a building gets hot during peak hours an internal combustion engine generates hightemperature exhaust gases and electronic components get warm when in operation instead of being wasted by release into the ambient environment sometimes waste heat or cold can be used by another process such as using hot engine coolant to heat a vehicle or a portion of heat that would otherwise be wasted can be reused in the same process if makeup heat is added to the system as with heat recovery ventilation in a building thermal energy storage which includes technologies both for short and longterm retention of heat or cold can create or improve the utility of waste heat or cold one example is waste heat from air conditioning machinery stored in a buffer tank to aid in night time heating another is seasonal thermal energy storage stes at a foundry in sweden the heat is stored in the bedrock surrounding a cluster of heat exchanger equipped boreholes and is used for space heating in an adjacent factory as needed even months later an example of using stes to use natural waste heat is the drake landing solar community in alberta canada which by using a cluster of boreholes in bedrock for interseasonal heat storage obtains 97 percent of its yearround heat from solar thermal collectors on the garage roofs another stes application is storing winter cold underground for summer air conditioningon a biological scale all organisms reject waste heat as part of their metabolic processes and will die if the ambient temperature is too high to allow this anthropogenic waste heat can contribute to the urban heat island effect the biggest point sources of waste heat originate from machines such as electrical generators or industrial processes such as steel or glass production and heat loss through building envelopes the burning of transport fuels is a major contribution to waste heat machines converting energy contained in fuels to mechanical work or electric energy produce heat as a byproduct in the majority of energy applications energy is required in multiple forms these energy forms typically include some combination of heating ventilation and air conditioning mechanical energy and electric power often these additional forms of energy are produced by a heat engine running on a source of hightemperat'</li><li>'boundaries at the flow extremes for a particular speed which are caused by different phenomena the steepness of the high flow part of a constant speed line is due to the effects of compressibility the position of the other end of the line is located by blade or passage flow separation there is a welldefined lowflow boundary marked on the map as a stall or surge line at which blade stall occurs due to positive incidence separation not marked as such on maps for turbochargers and gas turbine engines is a more gradually approached highflow boundary at which passages choke when the gas velocity reaches the speed of sound this boundary is identified for industrial compressors as overload choke sonic or 
stonewall the approach to this flow limit is indicated by the speed lines becoming more vertical other areas of the map are regions where fluctuating vane stalling may interact with blade structural modes leading to failure ie rotating stall causing metal fatigue different applications move over their particular map along different paths an example map with no operating lines is shown as a pictorial reference with the stallsurge line on the left and the steepening speed lines towards choke and overload on the right maps have similar features and general shape because they all apply to machines with spinning vanes which use similar principles for pumping a compressible fluid not all machines have stationary vanes centrifugal compressors may have either vaned or vaneless diffusers however a compressor operating as part of a gas turbine or turbocharged engine behaves differently to an industrial compressor because its flow and pressure characteristics have to match those of its driving turbine and other engine components such as power turbine or jet nozzle for a gas turbine and for a turbocharger the engine airflow which depends on engine speed and charge pressure a link between a gas turbine compressor and its engine can be shown with lines of constant engine temperature ratio ie the effect of fuellingincreased turbine temperature which raises the running line as the temperature ratio increases one manifestation of different behaviour appears in the choke region on the righthand side of a map it is a noload condition in a gas turbine turbocharger or industrial axial compressor but overload in an industrial centrifugal compressor hiereth et al shows a turbocharger compressor fullload or maximum fuelling curve runs up close to the surge line a gas turbine compressor fullload line also runs close to the surge line the industrial compressor overload is a capacity limit and requires high power levels to pass the high flow rates required excess power is available to inadvertently take the compressor beyond the overload limit to a hazardous condition'</li><li>'quantity thus it is useful to derive relationships between μ j t displaystyle mu mathrm jt and other more conveniently measured quantities as described below the first step in obtaining these results is to note that the joule – thomson coefficient involves the three variables t p and h a useful result is immediately obtained by applying the cyclic rule in terms of these three variables that rule may be written ∂ t ∂ p h ∂ h ∂ t p ∂ p ∂ h t − 1 displaystyle leftfrac partial tpartial prighthleftfrac partial hpartial trightpleftfrac partial ppartial hrightt1 each of the three partial derivatives in this expression has a specific meaning the first is μ j t displaystyle mu mathrm jt the second is the constant pressure heat capacity c p displaystyle cmathrm p defined by c p ∂ h ∂ t p displaystyle cmathrm p leftfrac partial hpartial trightp and the third is the inverse of the isothermal joule – thomson coefficient μ t displaystyle mu mathrm t defined by μ t ∂ h ∂ p t displaystyle mu mathrm t leftfrac partial hpartial prightt this last quantity is more easily measured than μ j t displaystyle mu mathrm jt thus the expression from the cyclic rule becomes μ j t − μ t c p displaystyle mu mathrm jt frac mu mathrm t cp this equation can be used to obtain joule – thomson coefficients from the more easily measured isothermal joule – thomson coefficient it is used in the following to obtain a mathematical expression for the joule – thomson coefficient in 
terms of the volumetric properties of a fluid to proceed further the starting point is the fundamental equation of thermodynamics in terms of enthalpy this is d h t d s v d p displaystyle mathrm d htmathrm d svmathrm d p now dividing through by dp while holding temperature constant yields ∂ h ∂ p t t ∂ s ∂ p t v displaystyle leftfrac partial hpartial prightttleftfrac partial spartial prighttv the partial derivative on the left is the isothermal joule – thomson coefficient μ t displaystyle mu mathrm t and the one on the right can be expressed in terms of the coefficient of thermal expansion via a maxwell relation the appropriate relation is ∂ s ∂ p t − ∂ v ∂ t p − v α displaystyle leftfrac partial spartial prighttleftfrac partial'</li></ul> |
| 21 | <ul><li>'##agate this type of plant this means that the characteristics of a determined cultivar remain unalteredbulbs can reproduce vegetatively in a number of ways depending on the type of storage organ the plant has bulbs can be evergreen such as clivia agapanthus and some species and varieties of iris and hemerocallis however the majority are deciduous dying down to the storage organ for part of the year this characteristic has been taken advantage of in the commercialization of these plants at the beginning of the rest period the bulbs can be dug out of the ground and prepared for sale as if they remain dry they do not need any nutrition for weeks or monthsbulbous plants are produced on an industrial scale for two main markets cut flowers and dried bulbs the bulbs are produced to satisfy the demand for bulbs for parks gardens and as house plants in addition to providing the bulbs necessary for the production of cut flowers the international trade in cut flowers has a worldwide value of approximately 11000 million euros which gives an idea of the economic importance of this activity the netherlands has been the leader in commercial production since the start of the 16th century both for the dried bulb market and for cut flowers in fact with approximately 30000 hectares dedicated to this activity the production of bulbs in the netherlands represents 65 of global production the netherlands also produces 95 of the international market in bulbs dedicated to the production of cut flowers the united states is the second largest producer followed by france japan italy united kingdom israel brazil and spain international bulb society httpwwwbulbsocietyorgestablished in 1933 this society is an international educational and scientific organization it is a charity dedicated to the dissemination of information regarding the cultivation conservation and botany of all types of bulbous plants their website contains an excellent gallery of high quality photographs of bulbous plantsthe pacific bulb society httpwwwpacificbulbsocietyorgorganized in 2002 this society disseminates information and shares experiences regarding the cultivation of ornamental bulbous plants their website contains an exceptional educational resource pacific bulb society wiki with images and information regarding numerous species of bulbous plantsaustralian bulb association httpswebarchiveorgweb20090518011847httpwwwausbulbsorgindexhtmorganized in 2001 it possessed an excellent collection of photographs of bulbous plants on its website list of flower bulbs hessayon dg 1999 the bulb expert london transworld publishers mathew brian 1978 the larger bulbs london bt batsford in association with the royal horticultural society isbn 9780'</li><li>'soil conservation is the prevention of loss of the topmost layer of the soil from erosion or prevention of reduced fertility caused by over usage acidification salinization or other chemical soil contamination slashandburn and other unsustainable methods of subsistence farming are practiced in some lesser developed areas a consequence of deforestation is typically largescale erosion loss of soil nutrients and sometimes total desertification techniques for improved soil conservation include crop rotation cover crops conservation tillage and planted windbreaks affect both erosion and fertility when plants die they decay and become part of the soil code 330 defines standard methods recommended by the us natural resources conservation service farmers have practiced soil conservation for 
millennia in europe policies such as the common agricultural policy are targeting the application of best management practices such as reduced tillage winter cover crops plant residues and grass margins in order to better address soil conservation political and economic action is further required to solve the erosion problem a simple governance hurdle concerns how we value the land and this can be changed by cultural adaptation soil carbon is a carbon sink playing a role in climate change mitigation contour ploughing orients furrows following the contour lines of the farmed area furrows move left and right to maintain a constant altitude which reduces runoff contour plowing was practiced by the ancient phoenicians for slopes between two and ten percent contour plowing can increase crop yields from 10 to 50 percent partially as a result of greater soil retention terracing is the practice of creating nearly level areas in a hillside area the terraces form a series of steps each at a higher level than the previous terraces are protected from erosion by other soil barriers terraced farming is more common on small farms keyline design is the enhancement of contour farming where the total watershed properties are taken into account in forming the contour lines tree shrubs and groundcover are effective perimeter treatment for soil erosion prevention by impeding surface flows a special form of this perimeter or interrow treatment is the use of a grass way that both channels and dissipates runoff through surface friction impeding surface runoff and encouraging infiltration of the slowed surface water windbreaks are sufficiently dense rows of trees at the windward exposure of an agricultural field subject to wind erosion evergreen species provide yearround protection however as long as foliage is present in the seasons of bare soil surfaces the effect of deciduous trees may be adequate cover crops such as nitrogenfixing legumes white turnips radishes and other species are rotated with cash crops to blanket the soil yearround and act as green manure that rep'</li><li>'blackberries are also cultivated in the same way in a tropical climate temperatures are prone to soar above all normal levels in such cases foggersmisters are used to reduce the temperature this does not increase the humidity levels in the poly house as the evaporated droplets are almost immediately ventilated to open air hightech poly houses even have spaceheating systems as well as soilheating systems to purify the soil of unwanted viruses bacteria and other organisms the recent indoisrael collaboration at gharunda near karnal is an excellent example of polyhouse farming taking place in a developing country if developing countries were to develop a special incentive program solely for fruitandvegetable farmers especially in demographically large nations like india then the migration rate from rural to urban areas as well as the loss of horticultural and fruitvegetable farmers to urban areas may be reduced this brings a huge potential to improve the farming sector which is key to longterm economic stability the small polytunnels used by each farmer in each village promote the cultivation of vegetables both onseason and offseason and would actually help to moderate the market rate for fruit and vegetables in long run on a yearround basis and would help to satisfy local market needs for example in india the inability to grow tomatoes generates price spikes during the monsoon season this is seen as an ideal time to grow tomatoes in 
polytunnels since they provide the ideal climate for the crop in india the abhinav farmers club grows flowers and organic vegetables in polytunnels hoophouses have existed at least since the 1940s but they are much more commonly used with each passing decade and their design continues to evolve because of the wide variety of constantly changing designs in reality there is an entirely continuous spectrum from high tunnels through low tunnels to the simplest row covers although they are often thought about as discrete steps major themes of continuing development are 1 achieving the same results with lighter construction and less cost and 2 making hoophouses easily movable the advantages of mobile hoophouses include greater return on investment with the same unit of investment getting greater use per year across different crops in different months and more flexibility on crop rotation without ever having to bother to dig the soil out of a stationary house or use soil steam sterilization to cure greenhouse soil sickness a us department of agriculture program is helping farmers install polytunnels the program was announced at the us white house garden in december 2009farmers in iraq are building these in increasing number and adding drip irrigation to grow tomatoes'</li></ul> |
| 18 | <ul><li>'the first postage stamps those of the united kingdom had no name in 1874 the universal postal union exempted the united kingdom from its rule which stated that a countrys name had to appear on their postage stamps so a profile of the reigning monarch was all that was required for identification of the uks stamps to this day the uk remains the only country not required to name itself on its stamps for all other upu members the name must appear in latin letters many countries using nonlatin alphabets used only those on their early stamps and they remain difficult for most collectors to identify today the name chosen is typically the countrys own name for itself with a modern trend towards using simpler and shorter forms or abbreviations for instance the republic of south africa inscribes with rsa while jordan originally used the hashemite kingdom of jordan and now just jordan some countries have multiple allowed forms from which the designer may choose the most suitable the name may appear in an adjectival form as in posta romana romanian post for romania dependent territories may or may not include the name of the parent country the graphic element of a stamp design falls into one of four major categories portrait bust profile or fullface emblem coat of arms flag national symbol posthorn etc numeric a design built around the numeral of value pictorialthe use of portrait busts of the ruler or other significant person or emblems was typical of the first stamps by extension from currency which was the closest model available to the early stamp designers usage pattern has varied considerably for 60 years from 1840 to 1900 all british stamps used exactly the same portrait bust of victoria enclosed in a dizzying variety of frames while spain periodically updated the image of alfonso xiii as he grew from child to adult norway has issued stamps with the same posthorn motif for over a century changing only the details from time to time as printing technology improves while the us has placed the flag of the united states into a wide variety of settings since first using it on a stamp in the 1950s while numeral designs are eminently practical in that they emphasize the most important element of the stamp they are the exception rather than the rule by far the greatest variety of stamp design seen today is in pictorial issues the choice of image is nearly unlimited ranging from plants and animals to figures from history to landscapes to original artwork images may represent realworld objects or be allegories or abstract designs the choice of pictorial designs is governed by a combination of anniversaries required annual issues such as christmas stamps postal rate changes exhaustion of existing stamp stocks and popular demand since postal administrations are either a branch'</li><li>'##ionism in both cases reflecting the influence of french impressionism which had spread internationally they are also known for their conceptual art as well as an internal split in the group which led to the formation of a new secession 1910 – 1914 key figures included walter leistikow franz skarbina max liebermann hermann struck and the norwegian painter edvard munch cologne 1909 – 1916 — also known as the sonderbund or the separate league of west german art lovers and artists the sonderbund westdeutscher kunstfreunde und kunstler was known for its landmark exhibitions introducing french impressionism postimpressionism and modernism to germany its 1912 show aimed to organize the most disputed paintings of 
our time and was later credited for helping develop a german version of expressionism while also presenting the most significant exhibition of european modernism prior to world war i the following year in fact it inspired a similar show in new york artists associated with the group included julius bretz max clarenbach august deusser walter ophey ernst osthaus egon schiele wilhelm schmurr alfred sohnrethel karli sohnrethel and otto sohnrethel along with collectors and curators of art dresden 1919 – 1925 — formed in reaction to the oppression of post world war i and the rise of the weimar republic otto schubert conrad felixmuller and otto dix are considered key figures in the dresden secession they are known for a highly accomplished form of german expressionism that was later labeled degenerate by the nazis selection was limited by availability academic art – style of painting and sculpture preraphaelite – group of english painters poets and critics founded in 1848pages displaying short descriptions of redirect targets salon des refuses art exhibition in paris first held in 1863 of works rejected by the academie des beauxarts simon hansulrich sezessionismus kunstgewerbe in literarischer und bildender kunst j b metzlersche verlagsbuchhandlung stuttgart 1976 isbn 3476002896'</li><li>'then still known as the vienna method was the monumental collection of 100 statistical charts gesellschaft und wirtschaft 1930 the first rule of isotype is that greater quantities are not represented by an enlarged pictogram but by a greater number of the samesized pictogram in neurath ’ s view variation in size does not allow accurate comparison what is to be compared – heightlength or area whereas repeated pictograms which always represent a fixed value within a certain chart can be counted if necessary isotype pictograms almost never depicted things in perspective in order to preserve this clarity and there were other guidelines for graphic configuration and use of colour the best exposition of isotype technique remains otto neurath ’ s book international picture language 1936 visual education was always the prime motive behind isotype which was worked out in exhibitions and books designed to inform ordinary citizens including schoolchildren about their place in the world it was never intended to replace verbal language it was a helping language always accompanied by verbal elements otto neurath realized that it could never be a fully developed language so instead he called it a “ languagelike technique ” as more requests came to the vienna museum from abroad a partner institute called mundaneum a name adopted from an abortive collaboration with paul otlet was established in 19312 to promote international work it formed branches containing small exhibitions in berlin the hague london and new york city members of the vienna team travelled periodically to the soviet union during the early 1930s in order to help set up the allunion institute of pictorial statistics of soviet construction and economy всесоюзныи институт изобразительнои статистики советского строительства и хозяиства commonly abbreviated to izostat изостат which produced statistical graphics about the five year plans among other things after the closure of the gesellschafts und wirtschaftsmuseum in 1934 neurath reidemeister and arntz fled to the netherlands where they set up the international foundation for visual education in the hague during the 1930s significant commissions were received from the us including a series of massproduced charts for 
the national tuberculosis association and otto neurath ’ s book modern man in the making 1939 a high point of isotype on which he reidemeister and arntz worked in close'</li></ul> |
| 5 | <ul><li>'giant stars and white and red dwarf stars could support a timeintegrated biota up to 1046 kgyears in the galaxy and 1057 kgyears in the universesuch astroecology considerations quantify the immense potentials of future life in space with commensurate biodiversity and possibly intelligence chemical analysis of carbonaceous chondrite meteorites show that they contain extractable bioavailable water organic carbon and essential phosphate nitrate and potassium nutrients the results allow assessing the soil fertilities of the parent asteroids and planets and the amounts of biomass that they can sustainlaboratory experiments showed that material from the murchison meteorite when ground into a fine powder and combined with earths water and air can provide the nutrients to support a variety of organisms including bacteria nocardia asteroides algae and plant cultures such as potato and asparagus the microorganisms used organics in the carbonaceous meteorites as the carbon source algae and plant cultures grew well also on mars meteorites because of their high bioavailable phosphate contents the martian materials achieved soil fertility ratings comparable to productive agricultural soils this offers some data relating to terraforming of marsterrestrial analogues of planetary materials are also used in such experiments for comparison and to test the effects of space conditions on microorganismsthe biomass that can be constructed from resources can be calculated by comparing the concentration of elements in the resource materials and in biomass equation 1 a given mass of resource materials mresource can support mbiomass x of biomass containing element x considering x as the limiting nutrient where cresource x is the concentration mass per unit mass of element x in the resource material and cbiomass x is its concentration in the biomass m b i o m a s s x m r e s o u r c e x c r e s o u r c e x c b i o m a s s x displaystyle mbiomassxmresourcexfrac cresourcexcbiomassx 1 assuming that 100000 kg biomass supports one human the asteroids may then sustain about 6e15 six million billion people equal to a million earths a million times the present population similar materials in the comets could support biomass and populations about one hundred times larger solar energy can sustain these populations for the predicted further five billion years of the sun these considerations yield a maximum timeintegrated biota of 3e30 kgyears in the solar system after the sun becomes a white dwarf star and other white dwarf stars can provide energy'</li><li>'astronomer and astrobiology pioneer gavriil adrianovich tikhov tikhov is considered to be the father of astrobotany research in the field has been conducted both with growing earth plants in space environments and searching for botanical life on other planets the first organisms in space were specially developed strains of seeds launched to 134 km 83 mi on 9 july 1946 on a us launched v2 rocket these samples were not recovered the first seeds launched into space and successfully recovered were maize seeds launched on 30 july 1946 which were soon followed by rye and cotton these early suborbital biological experiments were handled by harvard university and the naval research laboratory and were concerned with radiation exposure on living tissue in 1971 500 tree seeds loblolly pine sycamore sweetgum redwood and douglas fir were flown around the moon on apollo 14 these moon trees were planted and grown with controls back on earth where no changes were detected 
in 1982 the crew of the soviet salyut 7 space station conducted an experiment prepared by lithuanian scientists alfonsas merkys and others and grew some arabidopsis using fiton3 experimental microgreenhouse apparatus thus becoming the first plants to flower and produce seeds in space a skylab experiment studied the effects of gravity and light on rice plants the svet2 space greenhouse successfully achieved seed to seed plant growth in 1997 aboard space station mir bion 5 carried daucus carota and bion 7 carried maize aka corn plant research continued on the international space station biomass production system was used on the iss expedition 4 the vegetable production system veggie system was later used aboard iss plants tested in veggie before going into space included lettuce swiss chard radishes chinese cabbage and peas red romaine lettuce was grown in space on expedition 40 which were harvested when mature frozen and tested back on earth expedition 44 members became the first american astronauts to eat plants grown in space on 10 august 2015 when their crop of red romaine was harvested since 2003 russian cosmonauts have been eating half of their crop while the other half goes towards further research in 2012 a sunflower bloomed aboard the iss under the care of nasa astronaut donald pettit in january 2016 us astronauts announced that a zinnia had blossomed aboard the issin 2018 the veggie3 experiment was tested with plant pillows and root mats one of the goals is to grow food for crew consumption crops tested at this time include cabbage lettuce and mizuna plants that have been grown in space include arabidopsis thale cress bok choy tokyo bekana'</li><li>'the planet simulator also known as a planetary simulator is a climatecontrolled simulation chamber designed to study the origin of life the device was announced by researchers at mcmaster university on behalf of the origins institute on 4 october 2018 the simulator project begun in 2012 and was funded with 1 million from the canada foundation for innovation the ontario government and mcmaster university it was built and manufactured by angstrom engineering inc of kitchener ontariothe device was designed and developed by biophysicist maikel rheinstadter and coprincipal investigators biochemist yingfu li and astrophysicist ralph pudritz for researchers to study a theory that suggests life on early earth began in warm little ponds rather than in deep ocean vents nearly four billion years ago the device can recreate conditions of the primitive earth to see whether cellular life can be created and then later evolvein an 2018 news release maikel rheinstadter stated we want to understand how the first living cell was formed how the earth moved from a chemical world to a biological worldthe planet simulator can mimic the environmental conditions consistent on the early earth and other astronomical bodies including other planets and exoplanets by controlling temperature humidity pressure atmosphere and radiation levels within the simulation chamber according to researchers preliminary tests with the simulator under possible conditions of the early earth created protocells cells which are not living but very important nonetheless according to biologist david deamer the device is a game changer and the cells produced so far are significant the cells are not alive but are evolutionary steps toward a living system of molecules the simulator opens up a lot of experimental activities that were literally impossible before ” based on initial tests with 
the new simulator technology project director rheinstadter stated that it seems that the formation of life is probably a relatively frequent process in the universe'</li></ul> |
| 28 | <ul><li>'##nfjgk0 if k = 1 displaystyle kneq 1 and [UNK] j 1 n a j 1 [UNK] j 1 n f j e n displaystyle sum j1naj1sum j1nfjen let a ∗ displaystyle aast denote the conjugate transpose of a then a a ∗ a ∗ a n i displaystyle aaast aast ani this implies the desired orthogonality relationship for the characters ie [UNK] k 1 n f k ∗ g i f k g j n δ i j displaystyle sum k1nfkgifkgjndelta ij where δ i j displaystyle delta ij is the kronecker delta and f k ∗ g i displaystyle fkgi is the complex conjugate of f k g i displaystyle fkgi pontryagin duality'</li><li>'j x p i ν p i − 1 [UNK] j i 1 ω x p j ν p j x [UNK] i 1 ω x ν p i x p i x x [UNK] p prime p [UNK] x ν p x p displaystyle dxsum i1omega xleftnu pixleftprod j1i1pjnu pjxrightpinu pi1leftprod ji1omega xpjnu pjxrightrightsum i1omega xfrac nu pixpixxsum stackrel pmid xptext primefrac nu pxp where ωx a prime omega function is the number of distinct prime factors in x and νpx is the padic valuation of x for example d 60 d 2 2 ⋅ 3 ⋅ 5 2 2 1 3 1 5 ⋅ 60 92 displaystyle d60d22cdot 3cdot 5leftfrac 22frac 13frac 15rightcdot 6092 or d 81 d 3 4 4 ⋅ 3 3 ⋅ d 3 4 ⋅ 27 ⋅ 1 108 displaystyle d81d344cdot 33cdot d34cdot 27cdot 1108 the sequence of number derivatives for k 0 1 2 … begins sequence a003415 in the oeis 0 0 1 1 4 1 5 1 12 6 7 1 16 1 9 … displaystyle 00114151126711619ldots the logarithmic derivative ld x d x x [UNK] p prime p [UNK] x ν p x p displaystyle operatorname ld xfrac dxxsum stackrel pmid xptext primefrac nu pxp is a totally additive function ld x ⋅ y ld x ld y displaystyle operatorname ld xcdot yoperatorname ld xoperatorname ld y the arithmetic partial derivative of x displaystyle x with respect to p displaystyle p is defined as x p ′ ν p x p x displaystyle xpprime frac nu pxpx so the arithmetic derivative of x displaystyle x is given as d x [UNK] p prime p [UNK] x x p ′ displaystyle dxsum stackrel pmid xptext primexpprime an arithmetic function f displaystyle f is leibnizadditive if there is a totally multiplicative function h f displaystyle hf such that f m n f m h f n f n h f m displaystyle fmnfmhfnfnhfm for all positive integers m displaystyle m and n displaystyle n a motivation for this concept is'</li><li>'and every rcoloring of the integers greater than one there is a finite monochromatic subset s of these integers such that the conjecture was proven in 2003 by ernest s croot iii znams problem and primary pseudoperfect numbers are closely related to the existence of egyptian fractions of the form for instance the primary pseudoperfect number 1806 is the product of the prime numbers 2 3 7 and 43 and gives rise to the egyptian fraction 1 12 13 17 143 11806 egyptian fractions are normally defined as requiring all denominators to be distinct but this requirement can be relaxed to allow repeated denominators however this relaxed form of egyptian fractions does not allow for any number to be represented using fewer fractions as any expansion with repeated fractions can be converted to an egyptian fraction of equal or smaller length by repeated application of the replacement if k is odd or simply by replacing 1k 1k by 2k if k is even this result was first proven by takenouchi 1921 graham and jewett proved that it is similarly possible to convert expansions with repeated denominators to longer egyptian fractions via the replacement this method can lead to long expansions with large denominators such as botts 1967 had originally used this replacement technique to show that any rational number has egyptian fraction representations with 
arbitrarily large minimum denominators any fraction xy has an egyptian fraction representation in which the maximum denominator is bounded by and a representation with at most terms the number of terms must sometimes be at least proportional to log log y for instance this is true for the fractions in the sequence 12 23 67 4243 18061807 whose denominators form sylvesters sequence it has been conjectured that olog log y terms are always enough it is also possible to find representations in which both the maximum denominator and the number of terms are small graham 1964 characterized the numbers that can be represented by egyptian fractions in which all denominators are nth powers in particular a rational number q can be represented as an egyptian fraction with square denominators if and only if q lies in one of the two halfopen intervals martin 1999 showed that any rational number has very dense expansions using a constant fraction of the denominators up to n for any sufficiently large n engel expansion sometimes called an egyptian product is a form of egyptian fraction expansion in which each denominator is a multiple of the previous one in addition the sequence of multipliers ai is required to be nondecreasi'</li></ul> |
| 38 | <ul><li>'##ken the global language system theorises that language groups are engaged in unequal competition on different levels globally using the notions of a periphery semiperiphery and a core which are concepts of the world system theory de swaan relates them to the four levels present in the hierarchy of the global language system peripheral central supercentral and hypercentralde swaan also argues that the greater the range of potential uses and users of a language the higher the tendency of an individual to move up the hierarchy in the global language system and learn a more central language thus de swaan views the learning of second languages as proceeding up rather than down the hierarchy in the sense that they learn a language that is on the next level up for instance speakers of catalan a peripheral language have to learn spanish a central language to function in their own society spain meanwhile speakers of persian a central language have to learn arabic a supercentral language to function in their region on the other hand speakers of a supercentral language have to learn the hypercentral language to function globally as is evident from the huge number of nonnative english speakersaccording to de swaan languages exist in constellations and the global language system comprises a sociological classification of languages based on their social role for their speakers the worlds languages and multilinguals are connected in a strongly ordered hierarchical pattern there are thousands of peripheral or minority languages in the world each of which are connected to one of a hundred central languages the connections and patterns between each language is what makes up the global language system the four levels of language are the peripheral central supercentral and hypercentral languages peripheral languages at the lowest level peripheral languages or minority languages form the majority of languages spoken in the world 98 of the worlds languages are peripheral languages and spoken by less than 10 of the world ’ s population unlike central languages these are languages of conversation and narration rather than reading and writing of memory and remembrance rather than record they are used by native speakers within a particular area and are in danger of becoming extinct with increasing globalisation which sees more and more speakers of peripheral languages acquiring more central languages in order to communicate with others central languages the next level constitutes about 100 central languages spoken by 95 of the worlds population and generally used in education media and administration typically they are the national and official languages of the ruling state these are the languages of record and much of what has been said and written in those languages is saved in newspaper reports minutes and proceedings stored in archives included in history books collections of the classics of folk talks and folk ways increasingly recorded on electronic media and'</li><li>'the common misconception that aave carries ungrammatical features or that any speaker who speaks aave are uneducated or sloppy however like all dialects aave shows consistent internal logic and grammatical complexity as explained in the following examplesthe use of done coupled with the past tense of the verb in a sentence as seen in they done used all the good ones is a persistent structural trait of aave that is shared with southern european american vernacular varieties of english although the verbal particle done also 
occurs in caribbean creoles its syntactic configuration and semanticpragmatic function in aave differ somewhat from its creole counterpartsin aave done occurs only in preverbal auxiliary position with past tense forms whereas it occurs with a bare verb stem eg they done go and can occur in clausefinal position in some creoles in many aspects it functions in aave like a perfect tense referring to an action completed in the recent past but it can also be used to highlight the change of state or to intensify an activity as in the sentence i done told you not to mess up it is a stable feature but it is more frequently used in southern rural versions of aave than in urban aavedouble negation is also another feature commonly found in aave referring to the marking of negation on the auxiliary verb and indefinite pronoun an example would be she aint tellin nobody which would be she isnt telling anybody in standard english another feature copula absence or the absence of is or are in certain contexts can be observed as well he workin or they going home are some examples the habitual aspect marker or the invariant be habitual be as seen in he be workin they be tryin or i be like is a typical feature of aave it is the use of the base form of the copula verb be instead of the inflected forms such as are and am this is probably the most salient grammatical trait of aave both within the community and outside of it to the point of it being a stereotype prominently figured in representations of aave especially in the mediathe link between language and identity can be stretched into a tripartite where culture becomes key the addition of culture to the way language is linked to identity blur the lines because culture can be considered an abstract concept particularly in america it is nearly impossible to pinpoint a common culture in a country filled with so many different cultures especially when many of them are several generations removed from their origins because of the racial makeup of the country it is not ideal to include all american citizens under a'</li><li>'patois pl same or is speech or language that is considered nonstandard although the term is not formally defined in linguistics as such patois can refer to pidgins creoles dialects or vernaculars but not commonly to jargon or slang which are vocabularybased forms of cant in colloquial usage of the term especially in france class distinctions are implied by the very meaning of the term since in french patois refers to any sociolect associated with uneducated rural classes in contrast with the dominant prestige language standard french spoken by the middle and high classes of cities or as used in literature and formal settings the acrolect the term patois comes from old french patois local or regional dialect originally meaning rough clumsy or uncultivated speech possibly from the verb patoier to treat roughly from patte paw from old low franconian patta paw sole of the foot plus the suffix ois in france and other francophone countries patois has been used to describe nonstandard french and regional languages such as picard occitan and francoprovencal since 1643 and catalan after 1700 when the king louis xiv banned its use the word assumes the view of such languages being backward countrified and unlettered thus patois being potentially considered offensive when used by outsiders jean jaures said one names patois the language of a defeated nation in france and switzerland however the term patois no longer holds any offensive connotation and has 
indeed become a celebrated and distinguished variant of the numerous local tonguesthe vernacular form of english spoken in jamaica is also referred to as patois or patwa it is noted especially in reference to jamaican patois from 1934 jamaican patois language comprises words of the native languages of the many ethnic and cultural groups within the caribbean including spanish portuguese chinese amerindian and english along with several african languages some islands have creole dialects influenced by their linguistic diversity french spanish arabic hebrew german dutch italian chinese vietnamese and others jamaican patois is also spoken in costa rica and french creole is spoken in caribbean countries such as trinidad and tobago and guyana in south america often these patois are popularly considered broken english or slang but cases such as jamaican patois are classified with more correctness as a creole language in fact in the francophone caribbean the analogous term for local basilectal languages is creole see also jamaican english and jamaican creole antillean creole spoken in several present or formerly french islands of the lesser antilles includes vocabulary and grammar of african and carib origin in addition to french its dialects often contain folketymological derivatives of french words for example la'</li></ul> |
| 40 | <ul><li>'##2 is the invariant of rohlin1991 clifford taubes forselfdual yangmills connections on nonselfdual 4manifolds journal of differential geometry 17 1982 no 1 139 – 170 gauge theory on asymptotically periodic 4manifolds j differential geom 25 1987 no 3 363 – 430 cassons invariant and gauge theory j differential geom 31 1990 no 2 547 – 5991996 richard s hamilton forthe formation of singularities in the ricci flow surveys in differential geometry vol ii cambridge ma 1993 7 – 136 int press cambridge ma 1995 fourmanifolds with positive isotropic curvature comm anal geom 5 1997 no 1 1 – 921996 gang tian foron calabis conjecture for complex surfaces with positive first chern class invent math 101 1990 no 1 101 – 172 compactness theorems for kahlereinstein manifolds of dimension 3 and up j differential geom 35 1992 no 3 535 – 558 a mathematical theory of quantum cohomology j differential geom 42 1995 no 2 259 – 367 with yongbin ruan kahlereinstein metrics with positive scalar curvature invent math 130 1997 no 1 1 – 372001 jeff cheeger forfamilies index for manifolds with boundary superconnections and cones i families of manifolds with boundary and dirac operators j funct anal 89 1990 no 2 313 – 363 with jeanmichel bismut families index for manifolds with boundary superconnections and cones ii the chern character j funct anal 90 1990 no 2 306 – 354 with jeanmichel bismut lower bounds on ricci curvature and the almost rigidity of warped products ann of math 2 144 1996 no 1 189 – 237 with tobias colding on the structure of spaces with ricci curvature bounded below i j differential geom 46 1997 no 3 406 – 480 with tobias colding2001 yakov eliashberg forcombinatorial methods in symplectic geometry proceedings of the international congress of mathematicians vol 1 2 berkeley calif 1986 531 – 539 amer math soc providence ri 1987 classification of overtwisted contact structures on 3manifolds invent math 98 1989 no 3 623 – 6372001 michael j hopkins fornilpotence and stable homotopy theory i ann of math 2 128 1988 no 2 207 – 241 with ethan devinatz and jeffrey smith the rigid analytic period mapping lubintate space and stable homotopy theory bull amer math'</li><li>'this case the two metric spaces are essentially identical they are called quasiisometric if there is a quasiisometry between them a normed vector space is a vector space equipped with a norm which is a function that measures the length of vectors the norm of a vector v is typically denoted by ‖ v ‖ displaystyle lvert vrvert any normed vector space can be equipped with a metric in which the distance between two vectors x and y is given by the metric d is said to be induced by the norm ‖ ⋅ ‖ displaystyle lvert cdot rvert conversely if a metric d on a vector space x is translation invariant d x y d x a y a displaystyle dxydxaya for every x y and a in x and absolutely homogeneous d α x α y α d x y displaystyle dalpha xalpha yalpha dxy for every x and y in x and real number αthen it is the metric induced by the norm a similar relationship holds between seminorms and pseudometrics among examples of metrics induced by a norm are the metrics d1 d2 and d∞ on r 2 displaystyle mathbb r 2 which are induced by the manhattan norm the euclidean norm and the maximum norm respectively more generally the kuratowski embedding allows one to see any metric space as a subspace of a normed vector space infinitedimensional normed vector spaces particularly spaces of functions are studied in functional analysis completeness is particularly important in 
this context a complete normed vector space is known as a banach space an unusual property of normed vector spaces is that linear transformations between them are continuous if and only if they are lipschitz such transformations are known as bounded operators a curve in a metric space m d is a continuous function γ 0 t → m displaystyle gamma 0tto m the length of γ is measured by in general this supremum may be infinite a curve of finite length is called rectifiable suppose that the length of the curve γ is equal to the distance between its endpoints — that is its the shortest possible path between its endpoints after reparametrization by arc length γ becomes a geodesic a curve which is a distancepreserving function a geodesic is a shortest possible path between any two of its pointsa geodesic metric space is a metric space which admits a geodesic between any two of its points the spaces r 2 d 1 displaystyle mathbb r 2d1 and r 2 d 2 displaystyle mathbb r 2d2 are both geo'</li><li>'symmetryprotected topological spt order is a kind of order in zerotemperature quantummechanical states of matter that have a symmetry and a finite energy gap to derive the results in a mostinvariant way renormalization group methods are used leading to equivalence classes corresponding to certain fixed points the spt order has the following defining properties a distinct spt states with a given symmetry cannot be smoothly deformed into each other without a phase transition if the deformation preserves the symmetry b however they all can be smoothly deformed into the same trivial product state without a phase transition if the symmetry is broken during the deformation the above definition works for both bosonic systems and fermionic systems which leads to the notions of bosonic spt order and fermionic spt order using the notion of quantum entanglement we can say that spt states are shortrange entangled states with a symmetry by contrast for longrange entanglement see topological order which is not related to the famous epr paradox since shortrange entangled states have only trivial topological orders we may also refer the spt order as symmetry protected trivial order the boundary effective theory of a nontrivial spt state always has pure gauge anomaly or mixed gaugegravity anomaly for the symmetry group as a result the boundary of a spt state is either gapless or degenerate regardless how we cut the sample to form the boundary a gapped nondegenerate boundary is impossible for a nontrivial spt state if the boundary is a gapped degenerate state the degeneracy may be caused by spontaneous symmetry breaking andor intrinsic topological order monodromy defects in nontrivial 21d spt states carry nontrival statistics and fractional quantum numbers of the symmetry group monodromy defects are created by twisting the boundary condition along a cut by a symmetry transformation the ends of such cut are the monodromy defects for example 21d bosonic zn spt states are classified by a zn integer m one can show that n identical elementary monodromy defects in a zn spt state labeled by m will carry a total zn quantum number 2m which is not a multiple of n 21d bosonic u1 spt states have a hall conductance that is quantized as an even integer 21d bosonic so3 spt states have a quantized spin hall conductance spt states are shortrange entangled while topologically ordered states are longrange entangled both intrinsic topological order and also sp'</li></ul> |
| 4 | <ul><li>'hormone auxin which activates meristem growth alongside other mechanisms to control the relative angle of buds around the stem from a biological perspective arranging leaves as far apart as possible in any given space is favoured by natural selection as it maximises access to resources especially sunlight for photosynthesis in mathematics a dynamical system is chaotic if it is highly sensitive to initial conditions the socalled butterfly effect which requires the mathematical properties of topological mixing and dense periodic orbitsalongside fractals chaos theory ranks as an essentially universal influence on patterns in nature there is a relationship between chaos and fractals — the strange attractors in chaotic systems have a fractal dimension some cellular automata simple sets of mathematical rules that generate patterns have chaotic behaviour notably stephen wolframs rule 30vortex streets are zigzagging patterns of whirling vortices created by the unsteady separation of flow of a fluid most often air or water over obstructing objects smooth laminar flow starts to break up when the size of the obstruction or the velocity of the flow become large enough compared to the viscosity of the fluid meanders are sinuous bends in rivers or other channels which form as a fluid most often water flows around bends as soon as the path is slightly curved the size and curvature of each loop increases as helical flow drags material like sand and gravel across the river to the inside of the bend the outside of the loop is left clean and unprotected so erosion accelerates further increasing the meandering in a powerful positive feedback loop waves are disturbances that carry energy as they move mechanical waves propagate through a medium – air or water making it oscillate as they pass by wind waves are sea surface waves that create the characteristic chaotic pattern of any large body of water though their statistical behaviour can be predicted with wind wave models as waves in water or wind pass over sand they create patterns of ripples when winds blow over large bodies of sand they create dunes sometimes in extensive dune fields as in the taklamakan desert dunes may form a range of patterns including crescents very long straight lines stars domes parabolas and longitudinal or seif sword shapesbarchans or crescent dunes are produced by wind acting on desert sand the two horns of the crescent and the slip face point downwind sand blows over the upwind face which stands at about 15 degrees from the horizontal and falls onto the slip face where it accumulates up to the angle of repose of the sand which is about 35 degrees when the slip face'</li><li>'singleparticle trajectories spts consist of a collection of successive discrete points causal in time these trajectories are acquired from images in experimental data in the context of cell biology the trajectories are obtained by the transient activation by a laser of small dyes attached to a moving molecule molecules can now by visualized based on recent superresolution microscopy which allow routine collections of thousands of short and long trajectories these trajectories explore part of a cell either on the membrane or in 3 dimensions and their paths are critically influenced by the local crowded organization and molecular interaction inside the cell as emphasized in various cell types such as neuronal cells astrocytes immune cells and many others spt allowed observing moving particles these trajectories are used to investigate cytoplasm or 
membrane organization but also the cell nucleus dynamics remodeler dynamics or mrna production due to the constant improvement of the instrumentation the spatial resolution is continuously decreasing reaching now values of approximately 20 nm while the acquisition time step is usually in the range of 10 to 50 ms to capture short events occurring in live tissues a variant of superresolution microscopy called sptpalm is used to detect the local and dynamically changing organization of molecules in cells or events of dna binding by transcription factors in mammalian nucleus superresolution image acquisition and particle tracking are crucial to guarantee a high quality data once points are acquired the next step is to reconstruct a trajectory this step is done known tracking algorithms to connect the acquired points tracking algorithms are based on a physical model of trajectories perturbed by an additive random noise the redundancy of many short spts is a key feature to extract biophysical information parameters from empirical data at a molecular level in contrast long isolated trajectories have been used to extract information along trajectories destroying the natural spatial heterogeneity associated to the various positions the main statistical tool is to compute the meansquare displacement msd or second order statistical moment ⟨ x t δ t − x t 2 ⟩ [UNK] t α displaystyle langle xtdelta txt2rangle sim talpha average over realizations where α displaystyle alpha is the called the anomalous exponentfor a brownian motion ⟨ x t δ t − x t 2 ⟩ 2 n d t displaystyle langle xtdelta txt2rangle 2ndt where d is the diffusion coefficient n is dimension of the space some other properties can also be recovered from long trajectories such as the'</li><li>'each n displaystyle n the new function is defined at the points a a h a 2 h … a n h … displaystyle aaha2hldots anhldots the fundamental theorem of calculus states that differentiation and integration are inverse operations more precisely it relates the difference quotients to the riemann sums it can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration the fundamental theorem of calculus if a function f displaystyle f is defined on a partition of the interval a b displaystyle ab b a n h displaystyle banh and if f displaystyle f is a function whose difference quotient is f displaystyle f then we have [UNK] i 0 n − 1 f a i h h 2 δ x f b − f a displaystyle sum i0n1faihh2delta xfbfa furthermore for every m 0 1 2 … n − 1 textstyle m012ldots n1 we have δ δ x [UNK] i 0 m f a i h h 2 δ x f a m h h 2 displaystyle frac delta delta xsum i0mfaihh2delta xfamhh2 this is also a prototype solution of a difference equation difference equations relate an unknown function to its difference or difference quotient and are ubiquitous in the sciences the early history of discrete calculus is the history of calculus such basic ideas as the difference quotients and the riemann sums appear implicitly or explicitly in definitions and proofs after the limit is taken however they are never to be seen again however the kirchhoffs voltage law 1847 can be expressed in terms of the onedimensional discrete exterior derivative during the 20th century discrete calculus remains interlinked with infinitesimal calculus especially differential forms but also starts to draw from algebraic topology as both develop the main contributions come from the following individuals henri poincare triangulations barycentric subdivision dual triangulation 
poincare lemma the first proof of the general stokes theorem and a lot more l e j brouwer simplicial approximation theorem elie cartan georges de rham the notion of differential form the exterior derivative as a coordinateindependent linear operator exactnessclosedness of forms emmy noether heinz hopf leopold vietoris walther mayer modules of chains the boundary operator chain complexes j w alexander solomon lefschetz lev pontryagin andrey kolmogorov norman steenrod eduard cech the early cochain notions hermann weyl the kirchhoff laws'</li></ul> |
| 6 | <ul><li>'##ativistic degenerate matter a polytrope with index n 3 is a good model for the cores of white dwarfs of higher masses according to the equation of state of relativistic degenerate matter a polytrope with index n 3 is usually also used to model mainsequence stars like the sun at least in the radiation zone corresponding to the eddington standard model of stellar structure a polytrope with index n 5 has an infinite radius it corresponds to the simplest plausible model of a selfconsistent stellar system first studied by arthur schuster in 1883 and it has an exact solution a polytrope with index n ∞ corresponds to what is called an isothermal sphere that is an isothermal selfgravitating sphere of gas whose structure is identical to the structure of a collisionless system of stars like a globular cluster this is because for an ideal gas the temperature is proportional to ρ1n so infinite n corresponds to a constant temperaturein general as the polytropic index increases the density distribution is more heavily weighted toward the center r 0 of the body polytropic process equation of state murnaghan equation of state'</li><li>'together the analysis was expanded upon by alar toomre in 1964 and presented in a more general and comprehensive framework'</li><li>'the bidirectional reflectance distribution function brdf symbol f r ω i ω r displaystyle ftextromega textiomega textr is a function of four real variables that defines how light is reflected at an opaque surface it is employed in the optics of realworld light in computer graphics algorithms and in computer vision algorithms the function takes an incoming light direction ω i displaystyle omega texti and outgoing direction ω r displaystyle omega textr taken in a coordinate system where the surface normal n displaystyle mathbf n lies along the zaxis and returns the ratio of reflected radiance exiting along ω r displaystyle omega textr to the irradiance incident on the surface from direction ω i displaystyle omega texti each direction ω displaystyle omega is itself parameterized by azimuth angle [UNK] displaystyle phi and zenith angle θ displaystyle theta therefore the brdf as a whole is a function of 4 variables the brdf has units sr−1 with steradians sr being a unit of solid angle the brdf was first defined by fred nicodemus around 1965 the definition is where l displaystyle l is radiance or power per unit solidangleinthedirectionofaray per unit projectedareaperpendiculartotheray e displaystyle e is irradiance or power per unit surface area and θ i displaystyle theta texti is the angle between ω i displaystyle omega texti and the surface normal n displaystyle mathbf n the index i displaystyle texti indicates incident light whereas the index r displaystyle textr indicates reflected light the reason the function is defined as a quotient of two differentials and not directly as a quotient between the undifferentiated quantities is because irradiating light other than d e i ω i displaystyle mathrm d etextiomega texti which are of no interest for f r ω i ω r displaystyle ftextromega textiomega textr might illuminate the surface which would unintentionally affect l r ω r displaystyle ltextromega textr whereas d l r ω r displaystyle mathrm d ltextromega textr is only affected by d e i ω i displaystyle mathrm d etextiomega texti the spatially varying bidirectional reflectance distribution function svbrdf is a 6dimensional function f r ω i ω r x displaystyle ftextromega textiomega textrmathbf x where x displaystyle mathbf x describes a 
2d'</li></ul> |
| 35 | <ul><li>'microbiologically induced calcium carbonate precipitation micp is a biogeochemical process that induces calcium carbonate precipitation within the soil matrix biomineralization in the form of calcium carbonate precipitation can be traced back to the precambrian period calcium carbonate can be precipitated in three polymorphic forms which in the order of their usual stabilities are calcite aragonite and vaterite the main groups of microorganisms that can induce the carbonate precipitation are photosynthetic microorganisms such as cyanobacteria and microalgae sulfatereducing bacteria and some species of microorganisms involved in nitrogen cycle several mechanisms have been identified by which bacteria can induce the calcium carbonate precipitation including urea hydrolysis denitrification sulfate production and iron reduction two different pathways or autotrophic and heterotrophic pathways through which calcium carbonate is produced have been identified there are three autotrophic pathways which all result in depletion of carbon dioxide and favouring calcium carbonate precipitation in heterotrophic pathway two metabolic cycles can be involved the nitrogen cycle and the sulfur cycle several applications of this process have been proposed such as remediation of cracks and corrosion prevention in concrete biogrout sequestration of radionuclides and heavy metals all three principal kinds of bacteria that are involved in autotrophic production of carbonate obtain carbon from gaseous or dissolved carbon dioxide these pathways include nonmethylotrophic methanogenesis anoxygenic photosynthesis and oxygenic photosynthesis nonmethylotrophic methanogenesis is carried out by methanogenic archaebacteria which use co2 and h2 in anaerobiosis to give ch4 two separate and often concurrent heterotrophic pathways that lead to calcium carbonate precipitation may occur including active and passive carbonatogenesis during active carbonatogenesis the carbonate particles are produced by ionic exchanges through the cell membrane by activation of calcium andor magnesium ionic pumps or channels probably coupled with carbonate ion production during passive carbonatogenesis two metabolic cycles can be involved the nitrogen cycle and the sulfur cycle three different pathways can be involved in the nitrogen cycle ammonification of amino acids dissimilatory reduction of nitrate and degradation of urea or uric acid in the sulfur cycle bacteria follow the dissimilatory reduction of sulfate ureolysis or degradation of urea the microbial urease catalyzes the hydrolysis of urea into ammonium and carbonate one mole of urea is hydrolyzed intracellular'</li><li>'brown earth is a type of soil brown earths are mostly located between 35° and 55° north of the equator the largest expanses cover western and central europe large areas of western and transuralian russia the east coast of america and eastern asia here areas of brown earth soil types are found particularly in japan korea china eastern australia and new zealand brown earths cover 45 of the land in england and wales they are common in lowland areas below 1000 feet on permeable parent material the most common vegetation types are deciduous woodland and grassland due to the reasonable natural fertility of brown earths large tracts of deciduous woodland have been cut down and the land is now used for farming they are normally located in regions with a humid temperate climate rainfall totals are moderate usually below 76 cm per year and temperatures range from 4 
°c in the winter to 18 °c in the summer they are welldrained fertile soils with a ph of between 50 and 65 soils generally have three horizons the a b and c horizon horizon a is usually a brownish colour and over 20 cm in depth it is composed of mull humus well decomposed alkaline organic matter and mineral matter it is biologically active with many soil organisms and plant roots mixing the mull humus with mineral particles as a result the boundary between the a and b horizons can be illdefined in unploughed examples horizon b is mostly composed of mineral matter which has been weathered from the parent material but it often contains inclusions of more organic material carried in by organisms especially earthworms it is lighter in colour than the a horizon and is often weakly illuviated enriched with material from overlying horizons due to limited leaching only the more soluble bases are moved down through the profile horizon c is made up of the parent material which is generally permeable and non or slightly acidic for example clay loam brown earths are important because they are permeable and usually easy to work throughout the year so they are valued for agriculture they also support a much wider range of forest trees than can be found on wetter land they are freely drained soils with welldeveloped a and b horizons they often develop over relatively permeable bedrock of some kind but are also found over unconsolidated parent materials like river gravels some soil classifications include welldrained alluvial soils in the brown earths too typically the brown earths have dark brown topsoils with loamy particle sizeclasses and good structure – especially under grassland the b horizon lacks the grey colours and mottles characteristic of gley'</li><li>'and it is about twice the carbon content of the atmosphere or around four times larger than the human emissions of carbon between the start of the industrial revolution and 2011 further most of this carbon 1035 billion tons is stored in what is defined as the nearsurface permafrost no deeper than 3 metres 98 ft below the surface however only a fraction of this stored carbon is expected to enter the atmosphere in general the volume of permafrost in the upper 3 m of ground is expected to decrease by about 25 per 1 °c 18 °f of global warming 1283 yet even under the rcp85 scenario associated with over 4 °c 72 °f of global warming by the end of the 21st century about 5 to 15 of permafrost carbon is expected to be lost over decades and centuriesthe exact amount of carbon that will be released due to warming in a given permafrost area depends on depth of thaw carbon content within the thawed soil physical changes to the environment and microbial and vegetation activity in the soil notably estimates of carbon release alone do not fully represent the impact of permafrost thaw on climate change this is because carbon can be released through either aerobic or anaerobic respiration which results in carbon dioxide co2 or methane ch4 emissions respectively while methane lasts less than 12 years in the atmosphere its global warming potential is around 80 times larger than that of co2 over a 20year period and about 28 times larger over a 100year period while only a small fraction of permafrost carbon will enter the atmosphere as methane those emissions will cause 4070 of the total warming caused by permafrost thaw during the 21st century much of the uncertainty about the eventual extent of permafrost methane emissions is caused by the difficulty of accounting 
for the recently discovered abrupt thaw processes which often increase the fraction of methane emitted over carbon dioxide in comparison to the usual gradual thaw processes another factor which complicates projections of permafrost carbon emissions is the ongoing greening of the arctic as climate change warms the air and the soil the region becomes more hospitable to plants including larger shrubs and trees which could not survive there before thus the arctic is losing more and more of its tundra biomes yet it gains more plants which proceed to absorb more carbon some of the emissions caused by permafrost thaw will be offset by this increased plant growth but the exact proportion is uncertain it is considered very unlikely that this greening could offset all of the emissions from permafrost thaw during the'</li></ul> |
| 8 | <ul><li>'the enhanced avionics system or easy is an integrated modular avionics suite and cockpit display system used on dassault falcon business jets since falcon 900ex and later used in other newer falcon aircraft such as falcon 2000ex and falcon 7xeasy has been jointly developed by dassault and honeywell and is based on honeywell primus epic dassault aviation started to develop the easy flight deck concept in the mid1990s with a goal to have a much better integration of aircraft systems such as fmseasy was first integrated and certificated on falcon 900ex the first easy equipped 900ex was delivered in december 2003 honeywell primus epic base of easy was then integrated on other business jets and helicopterseasy was certified on the falcon 2000ex in june 2004 with deliveries starting shortly after falcon 7x was developed from the groundup with easy avionics in october 2008 dassault announced the launch of easy phase ii program at the annual nbaa meeting in orlando easy phase ii include several enhancements to easy such as synthetic vision system adsb out paperless charts future air navigation system fans1a using controller pilot data link communications cpdlc localizer performance with vertical guidance lpveasy phase ii was certified on falcon 900lx in june 2011 and on falcon 7x in may 2013 easy architecture is based on integrated modular avionics the processing modules are called mau modular avionics units the core operating system of easy is provided by ddci integrated modular avionics ima cockpit display system dassault falcon 7x dassault aviation'</li><li>'briefly before being replaced by sonne and bernard erika transmitted a vhf signal on 3033 mhz which could be received by standard ebl 3 receivers the signal was adjusted in phase between a ref point and a navigation point after processing the fug 121 displayed an angle from the beacon by using two beacons it was possible to achieve a fix however this was a problem as four receivers were required two listening to each station on smaller aircraft there was not enough space and german industry was by now having trouble supplying enough radios to the air force without adding 4 more receivers per plane the system was not deployed some sources indicate that there may have been a version called electra that operated at 250 to 300 khz but details are lacking or contradictorysonne this system transmitted on 270 – 480 khz and could be received on a fug 10 no special receiver was required as the pattern was discernable with the ear all that was required was the special charts at least 6 stations were built providing coverage from the bay of biscay to norway accuracy was reasonable during the day but errors up to 4 degrees occurred at night the allies captured the maps with resulted in the being issued to allied units because of this the allies left the sonne system alone after the war the stations were rebuilt and operated into the 1970s the system was called consol by that time mond development work was done on sonne sun to remove the night time errors this system was called mond moon work was never completed truhe this system was based on the british gee system after british units were captured the germans set up a project to clone the units the first unit was the fug 122 which allowed the reception of british gee signals units in france received these units and were able to navigate using british signals the germans then developed the concept to produce fug 123 receivers which would allow a wider turning range this allowed the germans to setup gee chains of their own further inside germany where the british gee signals were unusable there seems to have been some idea of using frequencies very close to the british frequencies to make jamming by the allies hard to do without jamming their own gee system one chain became operational around berlin fubl 1 used the lorenz landing beam system consisted of the ebl 1 and ebl 2 receivers with display device anf 2 the ebl 1 operated between 30 and 33 mhz and received the azimuth signals from a transmitter at the far end of the runway the ebl 2 operated at 38 mhz and received the two marker beacons as the aircraft approached the threshold to land the afn 2 provided the pilot with'</li><li>'a ground proximity warning system gpws is a system designed to alert pilots if their aircraft is in immediate danger of flying into the ground or an obstacle the united states federal aviation administration faa defines gpws as a type of terrain awareness and warning system taws more advanced systems introduced in 1996 are known as enhanced ground proximity warning systems egpws a modern type of taws in the late 1960s a series of controlled flight into terrain cfit accidents took the lives of hundreds of people a cfit accident is one where a properly functioning airplane under the control of a fully qualified and certified crew is flown into terrain water or obstacles with no apparent awareness on the part of the crewbeginning in the early 1970s a number of studies examined the occurrence of cfit accidents findings from these studies indicated that many such accidents could have been avoided if a warning device called a ground proximity warning system gpws had been used as a result of these studies and recommendations from the us national transportation safety board ntsb in 1974 the faa required all large turbine and turbojet airplanes to install tsoapproved gpws equipmentthe un international civil aviation organization icao recommended the installation of gpws in 1979c donald bateman a canadianborn engineer developed and is credited with the invention of gpwsin march 2000 the us faa amended operating rules to require that all us registered turbinepowered airplanes with six or more passenger seats exclusive of pilot and copilot seating be equipped with an faaapproved taws the mandate affects aircraft manufactured after march 29 2002 prior to the development of gpws large passenger aircraft were involved in 35 fatal cfit accidents per year falling to 2 per year in the mid1970s a 2006 report stated that from 1974 when the us faa made it a requirement for large aircraft to carry such equipment until the time of the report there had not been a single passenger fatality in a cfit crash by a large jet in us airspaceafter 1974 there were still some cfit accidents that gpws was unable to help prevent due to the blind spot of those early gpws systems more advanced systems were developed older taws or deactivation of the egpws or ignoring its warnings when an airport is not in its database still leave aircraft vulnerable to possible cfit incidents in april 2010 a polish air force tupolev tu154m aircraft crashed near smolensk russia in a possible cfit accident killing all passengers and crew including the president of poland lech kaczynski the aircraft was equipped with taws made by universal avionics systems of tucson according to the russian interstate aviation committee'</li></ul> |
| 12 | <ul><li>'of s m displaystyle sm for some integers m displaystyle m whose base k displaystyle k representations are close to that of n displaystyle n constantrecursive sequences can be thought of as 1 displaystyle 1 regular sequences where the base1 representation of n displaystyle n consists of n displaystyle n copies of the digit 1 displaystyle 1'</li><li>'the small triangles whose vertices all have different numbers are shaded in the graph each small triangle becomes a node in the new graph derived from the triangulation the small letters identify the areas eight inside the figure and area i designates the space outside of it as described previously those nodes that share an edge whose endpoints are numbered 1 and 2 are joined in the derived graph for example node d shares an edge with the outer area i and its vertices all have different numbers so it is also shaded node b is not shaded because two vertices have the same number but it is joined to the outer area one could add a new fullnumbered triangle say by inserting a node numbered 3 into the edge between 1 and 1 of node a and joining that node to the other vertex of a doing so would have to create a pair of new nodes like the situation with nodes f and g suppose there is a ddimensional simplex of sidelength n and it is triangulated into subsimplices of sidelength 1 there is a function that given any vertex of the triangulation returns its color the coloring is guaranteed to satisfy sperners boundary condition how many times do we have to call the function in order to find a rainbow simplex obviously we can go over all the triangulation vertices whose number is ond which is polynomial in n when the dimension is fixed but can it be done in time this problem was first studied by christos papadimitriou he introduced a complexity class called ppad which contains this as well as related problems such as finding a brouwer fixed point he proved that finding a sperner simplex is ppadcomplete even for d3 some 15 years later chen and deng proved ppadcompleteness even for d2 it is believed that ppadhard problems cannot be solved in time opolylog n suppose that each vertex of the triangulation may be labeled with multiple colors so that the coloring function is f s → 2n1 for every subsimplex the set of labelings on its vertices is a setfamily over the set of colors n 1 this setfamily can be seen as a hypergraph if for every vertex v on a face of the simplex the colors in fv are a subset of the set of colors on the face endpoints then there exists a subsimplex with a balanced labeling – a labeling in which the corresponding hypergraph admits a perfect fractional matching to illustrate here are some balanced labeling examples for n 2'</li><li>'labeling is also odd l − v − l v displaystyle lvlv hence by tuckers lemma there are two adjacent vertices u v displaystyle uv with opposite labels assume wlog that the labels are l u 1 l v − 1 displaystyle lu1lv1 by the definition of l this means that in both g u displaystyle gu and g v displaystyle gv coordinate 1 is the largest coordinate in g u displaystyle gu this coordinate is positive while in g v displaystyle gv it is negative by the construction of the triangulation the distance between g u displaystyle gu and g v displaystyle gv is at most [UNK] displaystyle epsilon so in particular g u 1 − g v 1 g u 1 g v 1 ≤ [UNK] displaystyle gu1gv1gu1gv1leq epsilon since g u 1 displaystyle gu1 and g v 1 displaystyle gv1 have opposite signs and so g u 1 ≤ [UNK] displaystyle gu1leq epsilon but since the largest coordinate of g u displaystyle gu is coordinate 1 this means that g u k ≤ [UNK] displaystyle gukleq epsilon for each 1 ≤ k ≤ n displaystyle 1leq kleq n so g u ≤ c n [UNK] displaystyle guleq cnepsilon where c n displaystyle cn is some constant depending on n displaystyle n and the norm ⋅ displaystyle cdot which you have chosen the above is true for every [UNK] 0 displaystyle epsilon 0 since s n displaystyle sn is compact there must hence be a point u in which g u 0 displaystyle gu0 no subset of r n displaystyle mathbb r n is homeomorphic to s n displaystyle sn the ham sandwich theorem for any compact sets a1 an in r n displaystyle mathbb r n we can always find a hyperplane dividing each of them into two subsets of equal measure above we showed how to prove the borsuk – ulam theorem from tuckers lemma the converse is also true it is possible to prove tuckers lemma from the borsuk – ulam theorem therefore these two theorems are equivalent there are several fixedpoint theorems which come in three equivalent variants an algebraic topology variant a combinatorial variant and a setcovering variant each variant can be proved separately using totally different arguments but each variant can also be reduced to the other variants in its row additionally each result in the top row can be deduced from the one below it in the same column in the original theorem the domain'</li></ul> |
| 33 | <ul><li>'xenoglossy also written xenoglossia and sometimes also known as xenolalia is the supposedly paranormal phenomenon in which a person is allegedly able to speak write or understand a foreign language that they could not have acquired by natural means the term derives from the ancient greek xenos ξενος foreigner and glossa γλωσσα tongue or language the term xenoglossy was first used by french parapsychologist charles richet in 1905 claims of xenoglossy are found in the new testament and contemporary claims have been made by parapsychologists and reincarnation researchers such as ian stevenson doubts have been expressed that xenoglossy is an actual phenomenon and there is no scientifically admissible evidence supporting any of the alleged instances of xenoglossytwo types of xenoglossy are distinguished recitative xenoglossy is the use of an unacquired language incomprehensibly while responsive xenoglossy refers to the ability to intelligibly employ the unlearned language as if already acquired this phenomenon is mentioned in acts of the apostles chapter 2 at pentecost when the first disciples of jesus christ gathered together numbering one hundred and twenty and of the tongues of fire landed on each of them formalizing the coming of the spirit in an episode of inspired communication that allows the disciples to express themselves in languages other than galilean and to be understood by strangers several accounts of miraculous abilities of some people to read write speak or understand a foreign language as mentioned in the bible have been related in similar christian accounts in the middle ages similar claims were also made by some pentecostal theologians in 1901 claims of mediums speaking foreign languages were made by spiritualists in the 19th century more recent claims of xenoglossy have come from reincarnation researchers who have alleged that individuals were able to recall a language spoken in a past life some reports of xenoglossy have surfaced in the popular press such as czech speedway rider matej kus who in september 2007 supposedly awoke after a crash and was able to converse in perfect english however press reports of his fluency in english were based entirely on anecdotal stories told by his czech teammates xenoglossy has been claimed to have occurred during exorcisms canadian parapsychologist and psychiatrist at the university of virginia ian stevenson claimed there were a handful of cases that suggested evidence of xenoglossy these included two where a subject under hypnosis could'</li><li>'have lost but if asked directly in the context of a psychic reading whether they have such an item the client may be shocked and assume that the reader learned the information directly from the deceased loved one robert todd carroll notes in the skeptics dictionary that some would consider this to be cold reading the rainbow ruse is a crafted statement which simultaneously awards the subject a specific personality trait as well as the opposite of that trait with such a phrase a cold reader can cover all possibilities and appear to have made an accurate deduction in the mind of the subject despite the fact that a rainbow ruse statement is vague and contradictory this technique is used since personality traits are not quantifiable and also because nearly everybody has experienced both sides of a particular emotion at some time in their lives statements of this type include most of the time you are positive and cheerful but there has been a time in the past when you were very upset you are a very kind and considerate person but when somebody does something to break your trust you feel deepseated anger i would say that you are mostly shy and quiet but when the mood strikes you you can easily become the center of attentiona cold reader can choose from a variety of personality traits think of its opposite and then bind the two together in a phrase vaguely linked by factors such as mood time or potential the mentalist branch of the stagemagician community approves of reading as long as it is presented strictly as an artistic entertainment and one is not pretending to be psychicsome performers who use cold reading are honest about their use of the technique lynne kelly kari coleman ian rowland and derren brown have used these techniques at either private fortunetelling sessions or open forum talking with the dead sessions in the manner of those who claim to be genuine mediums only after receiving acclaim and applause from their audience do they reveal that they needed no psychic power for the performance only a sound knowledge of psychology and cold reading in an episode of his trick of the mind series broadcast in march 2006 derren brown showed how easily people can be influenced through cold reading techniques by repeating bertram forers famous demonstration of the personal validation fallacy or forer effect in a detailed review of four sittings conducted by medium tyler henry edward and susan gerbic reviewed all statements made by him on the tv show hollywood medium in their opinion not one statement made by henry was accurate yet each sitter felt that their reading was highly successful in interviews with each sitter after their sitting all four claimed specific statements made by henry but after reviewing the show it was shown that he had not made those statements each sit'</li><li>'al concluding that the ganzfeld studies have not been independently replicated and had thus failed to produce evidence for psi according to hyman reliance on metaanalysis as the sole basis for justifying the claim that an anomaly exists and that the evidence for it is consistent and replicable is fallacious it distorts what scientists mean by confirmatory evidence storm et al published a response to hyman claiming the ganzfeld experimental design has proved to be consistent and reliable but parapsychology is a struggling discipline that has not received much attention so further research on the subject is necessary rouder et al in 2013 wrote that critical evaluation of storm et als metaanalysis reveals no evidence for psi no plausible mechanism and omitted replication failuresa 2016 paper examined questionable research practices in the ganzfeld experiments and simulated how such practices could cause erroneous positive results there are several common criticisms of some or all of the ganzfeld experiments isolation – richard wiseman and others argue that not all of the studies used soundproof rooms so it is possible that when videos were playing the experimenter could have heard it and later given involuntary cues to the receiver during the selection process it could even have been possible that the receiver themselves could hear the video randomization – when subjects are asked to choose from a variety of selections there is an inherent bias to choose the first selection they are shown if the order in which they are shown the selections is randomized each time this bias will be averaged out the randomization procedures used in the experiment have been criticized for not randomizing satisfactorily the psi assumption – the assumption that any statistical deviation from chance is evidence for telepathy is highly controversial strictly speaking a deviation from chance is only evidence that either this was a rare statistically unlikely occurrence that happened by chance or something was causing a deviation from chance flaws in the experimental design are a common cause of this and so the assumption that it must be telepathy is fallaciouswriting in 1985 c e m hansel discovered weaknesses in the design and possibilities of sensory leakage in the ganzfeld experiments reported by carl sargent and other parapsychologists hansel concluded the ganzfeld studies had not been independently replicated and that esp is no nearer to being established than it was a hundred years agodavid marks in his book the psychology of the psychic 2000 has noted that during the autoganzfeld experiments the experimenter sat only fourteen feet from the senders room soundproofing tiles were eventually added but they were designed to absorb sound not to prevent transmission according to marks this was inadequate'</li></ul> |
| 22 | <ul><li>'water resources are natural resources of water that are potentially useful for humans for example as a source of drinking water supply or irrigation water 97 of the water on earth is salt water and only three percent is fresh water slightly over twothirds of this is frozen in glaciers and polar ice caps the remaining unfrozen freshwater is found mainly as groundwater with only a small fraction present above ground or in the air natural sources of fresh water include surface water under river flow groundwater and frozen water artificial sources of fresh water can include treated wastewater wastewater reuse and desalinated seawater human uses of water resources include agricultural industrial household recreational and environmental activities water resources are under threat from water scarcity water pollution water conflict and climate change fresh water is a renewable resource yet the worlds supply of groundwater is steadily decreasing with depletion occurring most prominently in asia south america and north america although it is still unclear how much natural renewal balances this usage and whether ecosystems are threatened natural sources of fresh water include surface water under river flow groundwater and frozen water surface water is water in a river lake or fresh water wetland surface water is naturally replenished by precipitation and naturally lost through discharge to the oceans evaporation evapotranspiration and groundwater recharge the only natural input to any surface water system is precipitation within its watershed the total quantity of water in that system at any given time is also dependent on many other factors these factors include storage capacity in lakes wetlands and artificial reservoirs the permeability of the soil beneath these storage bodies the runoff characteristics of the land in the watershed the timing of the precipitation and local evaporation rates all of these factors also affect the proportions of water loss humans often increase storage capacity by constructing reservoirs and decrease it by draining wetlands humans often increase runoff quantities and velocities by paving areas and channelizing the stream flow natural surface water can be augmented by importing surface water from another watershed through a canal or pipeline brazil is estimated to have the largest supply of fresh water in the world followed by russia and canada water from glaciers glacier runoff is considered to be surface water the himalayas which are often called the roof of the world contain some of the most extensive and rough high altitude areas on earth as well as the greatest area of glaciers and permafrost outside of the poles ten of asias largest rivers flow from there and more than a billion peoples livelihoods depend on them to complicate matters temperatures there are rising more rapidly than the global average in nepal the temperature has risen by 06 degrees celsius over the last decade whereas globally the earth has'</li><li>'##ng magnitude from leftright the finite water content vadose zone flux method works with any monotonic water retention curveunsaturated hydraulic conductivity relations such as brooks and corey clapp and hornberger and van genuchtenmualem the method might work with hysteretic water retention relations these have not yet been tested the finite water content method lacks the effect of soil water diffusion this omission does not affect the accuracy of flux calculations using the method because the mean of the diffusive flux is small practically this means that the shape of the wetting front plays no role in driving the infiltration the method is thus far limited to 1d in practical applications the infiltration equation was extended to 2 and quasi3 dimensions more work remains in extending the entire method into more than one dimension the paper describing this method was selected by the early career hydrogeologists network of the international association of hydrogeologists to receive the coolest paper published in 2015 award in recognition of the potential impact of the publication on the future of hydrogeology richards equation infiltration hydrology soil moisture velocity equation'</li><li>'stress distribution in soil is a function of the type of soil the relative rigidity of the soil and the footing and the depth of foundation at level of contact between footing and soilthe estimation of vertical stresses at any point in a soil mass due to external loading is essential to the prediction of settlements of buildings bridges and pressure the solution to the problem of calculating the stresses in an elastic half space subjected to a vertical point load at the surface will be of value in estimating the stresses induced in a deposit of soil whose depth is large compared to the dimensions of that part of the surface that is loaded δ σ z − 3 p 2 π r 2 cos 3 θ displaystyle delta sigma zfrac 3p2pi r2cos 3theta δ σ r p 2 π r 2 − 3 cos θ sin 2 θ 1 − 2 μ 1 cos θ displaystyle delta sigma rfrac p2pi r23cos theta sin 2theta frac 12mu 1cos theta δ σ t p 2 π r 2 1 − 2 μ cos θ − 1 1 cos θ displaystyle delta sigma tfrac p2pi r212mu cos theta frac 11cos theta δ τ − 3 p 2 π r 2 cos 2 θ sin θ displaystyle delta tau frac 3p2pi r2cos 2theta sin theta cos θ z r displaystyle cos theta frac zr r r 2 z 2 displaystyle rsqrt r2z2 δ σ z − 3 p z 3 2 π r 5 − 3 p 2 π z 3 r 2 z 2 5 2 − 3 p 2 π z 2 1 r z 2 5 2 displaystyle delta sigma zfrac 3pz32pi r5frac 3p2pi frac z3r2z252frac 3p2pi z2left1leftfrac rzright2rightfrac 52 σ q 1 − 1 r z 2 1 3 2 displaystyle sigma q1frac 1frac rz2132'</li></ul> |
| 3 | <ul><li>'##ilise and suggest other technologies such as mobile phones or psion organisers as such feedback studies involve asynchronous communication between the participants and the researchers as the participants ’ data is recorded in their diary first and then passed on to the researchers once completefeedback studies are scalable that is a largescale sample can be used since it is mainly the participants themselves who are responsible for collecting and recording data in elicitation studies participants capture media as soon as the phenomenon occurs the media is usually in the form of a photograph but can be in other different forms as well and so the recording is generally quick and less effortful than feedback studies these media are then used as prompts and memory cues to elicit memories and discussion in interviews that take place much later as such elicitation studies involve synchronous communication between the participants and the researchers usually through interviewsin these later interviews the media and other memory cues such as what activities were done before and after the event can improve participants ’ episodic memory in particular photos were found to elicit more specific recall than all other media types there are two prominent tradeoffs between each type of study feedback studies involve answering questions more frequently and in situ therefore enabling more accurate recall but more effortful recording in contrast elicitation studies involve quickly capturing media in situ but answering questions much later therefore enabling less effortful recording but potentially inaccurate recall diary studies are most often used when observing behavior over time in a natural environment they can be beneficial when one is looking to find new qualitative and quantitative data advantages of diary studies are numerous they allow collecting longitudinal and temporal information reporting events and experiences in context and inthemoment participants to diary their behaviours thoughts and feelings inthemoment thereby minimising the potential for post rationalisation determining the antecedents correlations and consequences of daily experiences and behaviors there are some limitations of diary studies mainly due to their characteristics of reliance on memory and selfreport measures there is low control low participation and there is a risk of disturbing the action in feedback studies it can be troubling and disturbing to write everything down the validity of diary studies rests on the assumption that participants will accurately recall and record their experiences this is somewhat more easily enabled by the fact that diaries are completed media is captured in a natural environment and closer in realtime to any occurrences of the phenomenon of interest however there are multiple barriers to obtaining accurate data such as social desirability bias where participants may answer in a way that makes them appear more socially desirable this may be more prominent in longitudinal studies'</li><li>'indigenous media can reference film video music digital art and sound produced and created by and for indigenous people it refers to the use of communication tools pathways and outlets by indigenous peoples for their own political and cultural purposes indigenous media is the use of modern media techniques by indigenous peoples also called fourth world peoples indigenous media helps communities in their fight against cultural extinction economic and ecological decline and forced displacement most often in the field of indigenous media the creators of the media are also the consumers together with the neighboring communities sometimes the media is also received by institutions and film festivals located far away from the production location like the american indian film festival the production is usually locally based low budget and small scale but it can also be sponsored by different support groups and governments 34 – 35 the concept of indigenous media could be extended to first world alternative media like aids activist video the research of indigenous media and the international indigenous movement in the process of globalization develop in parallel in the second half of the 20th century united nations agencies including the united nations working group on indigenous populations wgip led the movement the united nations general assembly adopted a declaration aimed at protecting the rights of indigenous peoples in 2007 the theoretical development of indigenous media research first occurred in anthropology in 1980 it was accompanied by a critical research method that diverged from postcolonialism and poststructuralism the newer method attempted to minimize the power imbalance between the researcher and the researched leading up to this ethnographic films that gave photographic techniques to locals can be traced back as far as the navajo project in 1960 the project was the pioneering work of sol worth and john adair to which the origin of a new anthropological language and style of ethnography can be attributedhowever the indigenous media movement was not a significant phenomenon for another decade the widely recognized start of the new media movement was a collaboration between american anthropologist eric michaels and australia ’ s warlpiri aboriginal broadcasting this new type of collaborative anthropological project exemplified a change from a simple observation of the life of the indigenous people to a cultural record by the indigenous people themselves following the warlpiri project the brazilian kayapo village project of vincent carelli and terence turner and the indigenous series by maori producer barry barclay in new zealand have been important milestones in the development of indigenous media however it was faye ginsburg an american anthropologist who laid the theoretical foundation for the study of indigenous media her research in 1991 expounded the faustian dilemma between technology and tribal life and inspired later indigenous media researchers the important theories of recent indigenous media studies have highlighted the dynamic relationship between local indigenous communities and their countries and globalization lorna roth'</li><li>'results did not predict any prejudices towards black individuals this study used emic approaches of study by conducting interviews with the locals and etic approaches by giving participants generalized personality tests exonym and endonymother explorations of the differences between reality and humans models of it blind men and an elephant emic and etic units internalism and externalism map – territory relation creswell j w 1998 qualitative enquiry and research design choosing among five traditions london uk sage dundes alan 1962 from etic to emic units in the structural study of folktales journal of american folklore 75 296 95 – 105 doi102307538171 jstor i223629 goodenough ward 1970 describing a culture description and comparison in cultural anthropology cambridge uk cambridge university press pp 104 – 119 isbn 9780202308616 harris marvin 1976 history and significance of the emicetic distinction annual review of anthropology 5 329 – 350 doi101146annurevan05100176001553 harris marvin 1980 chapter two the epistemology of cultural materialism cultural materialism the struggle for a science of culture new york random house pp 29 – 45 isbn 9780759101340 headland thomas pike kenneth harris marvin eds 1990 emics and etics the insideroutsider debate sage jahoda g 1977 y j poortinga ed in pursuit of the emicetic distinction can we ever capture it basic problems in crosscultural psychology pp 55 – 63 jardine nick 2004 etics and emics not to mention anemics and emetics in the history of the sciences history of science 42 3 261 – 278 bibcode2004hissc42261j doi101177007327530404200301 s2cid 141081973 jingfeng xia 2013 an anthropological emicetic perspective on open access practices academic search premier kitayama shinobu cohen dov 2007 handbook of cultural psychology new york guilford press kottak conrad 2006 mirror for humanity new york mcgraw hill isbn 9780078034909 nattiez jeanjacques 1987 musicologie generale et semiologue music and discourse toward a semiology of music translated by carolyn abbate isbn 9780691027142 pike kenneth lee ed 1967 language in relation to a unified theory of structure of human behavior 2nd ed the hague netherlands mouton'</li></ul> |
| 34 | <ul><li>'democratic education is a type of formal education that is organized democratically so that students can manage their own learning and participate in the governance of their school democratic education is often specifically emancipatory with the students voices being equal to the teachersthe history of democratic education spans from at least the 17th century while it is associated with a number of individuals there has been no central figure establishment or nation that advocated democratic education in 1693 john locke published some thoughts concerning education in describing the teaching of children he declares none of the things they are to learn should ever be made a burthen to them or imposd on them as a task whatever is so proposd presently becomes irksome the mind takes an aversion to it though before it were a thing of delight or indifferency let a child but be orderd to whip his top at a certain time every day whether he has or has not a mind to it let this be but requird of him as a duty wherein he must spend so many hours morning and afternoon and see whether he will not soon be weary of any play at this rate jeanjacques rousseaus book of advice on education emile was first published in 1762 emile the imaginary pupil he uses for illustration was only to learn what he could appreciate as useful he was to enjoy his lessons and learn to rely on his own judgement and experience the tutor must not lay down precepts he must let them be discovered wrote rousseau and urged him not make emile learn science but let him discover it he also said that we should not substitute books for personal experience because this does not teach us to reason it teaches us to use other peoples reasoning it teaches us to believe a great deal but never to know anything while locke and rousseau were concerned only with the education of the children of the wealthy in the 19th century leo tolstoy set up a school for peasant children this was on his own estate at yasnaya polyana russia in the late 19th century he tells us that the school evolved freely from principles introduced by teachers and pupils that in spite of the preponderating influence of the teacher the pupil had always had the right not to come to school or having come not to listen to the teacher and that the teacher had the right not to admit a pupil and was able to use all the influence he could muster to win over the community where the children were always in the majority dom sierot in 1912 janusz korczak founded dom sierot the jewish orphanage in warsaw which was run on democratic lines in 1940 dom si'</li><li>'is done through six points of reference learners studentsteachers in dialogue approach their acts of knowing as grounded in individual experience and circumstance learners approach the historical and cultural world as a transformable reality shaped by human ideological representations of reality learners make connections between their own conditions and the conditions produced through the making of reality learners consider the ways that they can shape this reality through their methods of knowing this new reality is collective shared and shifting learners develop literacy skills that put their ideas into print thus giving potency to the act of knowing learners identify the myths in the dominant discourse and work to destabilize these myths ending the cycle of oppression the montessori method developed by maria montessori is an example of problemposing education in an early childhood model ira shor a professor of composition and rhetoric at cuny who has worked closely with freire also advocates a problem posing model in his use of critical pedagogy he has published on the use of contract grading the physical setup of the classroom and the political aspects of student and teacher rolesjames d kirylo in his book paulo freire the man from recife reiterated freires thought and stated that a problemposing education is one where human beings are viewed as conscious beings who are unfinished yet in process of becoming other advocates of problemposing critical pedagogy include henry giroux peter mclaren and bell hooks inquirybased learning problembased learning unschooling'</li><li>'ambiguity tolerance – intolerance is a psychological construct that describes the relationship that individuals have with ambiguous stimuli or events individuals view these stimuli in a neutral and open way or as a threat ambiguity tolerance – intolerance is a construct that was first introduced in 1949 through the work of else frenkelbrunswik while researching ethnocentrism in children and was perpetuated by her research of ambiguity intolerance in connection to authoritarian personality it serves to define and measure how well an individual responds when presented with an event that results in ambiguous stimuli or situations in her study she tested the notion that children who are ethnically prejudiced also tend to reject ambiguity more so than their peers she studied children who ranked high and low on prejudice in a story recall test and then studied their responses to an ambiguous disc shaped figure the children who scored high in prejudice were expected to take longer to give a response to the shape less likely to make changes on their response and less likely to change their perspectives a study by kenny and ginsberg 1958 retesting frenkelbrunswiks original connection of ambiguity intolerance to ethnocentrism and authoritarian personality found that the results were unreplicable however it was discussed that this may be due to the fact that at the time the study was done incorrect methodology was used and that there lacked a concrete definition as to what the construct was most of the research on this subject was completed in the two decades after the publication of the authoritarian personality however the construct is still studied in psychological research today budner gives three examples as to what could be considered ambiguous situations a situation with no familiar cues a situation in which there are many cues to be taken into consideration and a situation in which cues suggest the existence of different structures to be adhered to there have been many attempts to conceptualize the construct of ambiguity tolerance – intolerance as to give researchers a more standard concept to work with many of these conceptualizations are based on the work of frenkelbrunswik budner 1962 defines the construct as the following intolerance of ambiguity may be defined as the tendency to perceive ie interpret ambiguous situations as sources of threat tolerance of ambiguity as the tendency to perceive ambiguous situations as desirableadditionally bochner 1965 categorized attributes given by frenkelbrunswiks theory of individuals who are intolerant to ambiguity the nine primary characteristics describe intolerance of ambiguity and are as follows need for categorization need for certainty inability to allow good and bad traits to exist in the same person'</li></ul> |
| 31 | <ul><li>'in philosophy transcendence is the basic ground concept from the words literal meaning from latin of climbing or going beyond albeit with varying connotations in its different historical and cultural stages it includes philosophies systems and approaches that describe the fundamental structures of being not as an ontology theory of being but as the framework of emergence and validation of knowledge of being these definitions are generally grounded in reason and empirical observation and seek to provide a framework for understanding the world that is not reliant on religious beliefs or supernatural forces transcendental is a word derived from the scholastic designating the extracategorical attributes of beings in religion transcendence refers to the aspect of gods nature and power which is wholly independent of the material universe beyond all physical laws this is contrasted with immanence where a god is said to be fully present in the physical world and thus accessible to creatures in various ways in religious experience transcendence is a state of being that has overcome the limitations of physical existence and by some definitions has also become independent of it this is typically manifested in prayer seance meditation psychedelics and paranormal visions it is affirmed in various religious traditions concept of the divine which contrasts with the notion of a god or the absolute that exists exclusively in the physical order immanentism or indistinguishable from it pantheism transcendence can be attributed to the divine not only in its being but also in its knowledge thus god may transcend both the universe and knowledge is beyond the grasp of the human mind although transcendence is defined as the opposite of immanence the two are not necessarily mutually exclusive some theologians and metaphysicians of various religious traditions affirm that a god is both within and beyond the universe panentheism in it but not of it simultaneously pervading it and surpassing it the ethics of baruch spinoza used the expression transcendental terms in latin termini transcendentales to indicate concepts like being thing something which are so general not to be included in the definitions of species genus and category in modern philosophy immanuel kant introduced a new term transcendental thus instituting a new third meaning in his theory of knowledge this concept is concerned with the condition of possibility of knowledge itself he also opposed the term transcendental to the term transcendent the latter meaning that which goes beyond transcends any possible knowledge of a human being for him transcendental meant knowledge about our cognitive faculty with regard to how objects are possible a priori i call all knowledge transcendental if it is occupied not with objects'</li><li>'atoms in molecules — collision theory — ligand field theory successor to crystal field theory — variational transitionstate theory — benson group increment theory — specific ion interaction theory climatology climate change theory general study of climate changes and anthropogenic climate change acc global warming agw theories due to human activity computer science automata theory — queueing theory cosmology big bang theory — cosmic inflation — loop quantum gravity — superstring theory — supergravity — supersymmetric theory — multiverse theory — holographic principle — quantum gravity — mtheory economics macroeconomic theory — microeconomic theory — law of supply and demand education constructivist theory — critical pedagogy theory — education theory — multiple intelligence theory — progressive education theory engineering circuit theory — control theory — signal theory — systems theory — information theory film film theory geology plate tectonics humanities critical theory jurisprudence or legal theory natural law — legal positivism — legal realism — critical legal studies law see jurisprudence also case theory linguistics xbar theory — government and binding — principles and parameters — universal grammar literature literary theory mathematics approximation theory — arakelov theory — asymptotic theory — bifurcation theory — catastrophe theory — category theory — chaos theory — choquet theory — coding theory — combinatorial game theory — computability theory — computational complexity theory — deformation theory — dimension theory — ergodic theory — field theory — galois theory — game theory — gauge theory — graph theory — group theory — hodge theory — homology theory — homotopy theory — ideal theory — intersection theory — invariant theory — iwasawa theory — ktheory — kktheory — knot theory — ltheory — lie theory — littlewood – paley theory — matrix theory — measure theory — model theory — module theory — morse theory — nevanlinna theory — number theory — obstruction theory — operator theory — order theory — pcf theory — perturbation theory — potential theory — probability theory — ramsey theory — rational choice theory — representation theory — ring theory — set theory — shape theory — small cancellation theory — spectral theory — stability theory — stable theory — sturm – liouville theory — surgery theory — twistor theory — yang – mills theory music music theory philosophy proof theory — speculative reason — theory of truth — type theory — value theory — virtue theory physics acoustic theory — antenna theory — atomic theory — bcs theory — conformal field theory — dirac hole theory — dynamo theory — landau theory — mtheory — perturbation theory — theory'</li><li>'##ism turned this world on its head he argues for the nominalists all real being was individual or particular and universals were thus mere fictionsanother scholar victor bruno follows the same line according to bruno nominalism is one of the first signs of rupture in the medieval system the dismembering of the particulars the dangerous attribution to individuals to a status of totalization of possibilities in themselves all this will unfold in an existential fissure that is both objective and material the result of this fissure will be the essays to establish the nation state indian philosophy encompasses various realist and nominalist traditions certain orthodox hindu schools defend the realist position notably purva mimamsa nyaya and vaisheshika maintaining that the referent of the word is both the individual object perceived by the subject of knowledge and the universal class to which the thing belongs according to indian realism both the individual and the universal exist objectively with the second underlying the former buddhists take the nominalist position especially those of the sautrantika and yogacara schools they were of the opinion that words have as referent not true objects but only concepts produced in the intellect these concepts are not real since they do not have efficient existence that is causal powers words as linguistic conventions are useful to thought and discourse but even so it should not be accepted that words apprehend reality as it is dignaga formulated a nominalist theory of meaning called apohavada or theory of exclusions the theory seeks to explain how it is possible for words to refer to classes of objects even if no such class has an objective existence dignagas thesis is that classes do not refer to positive qualities that their members share in common on the contrary universal classes are exclusions apoha as such the cow class for example is composed of all exclusions common to individual cows they are all nonhorse nonelephant etc nominalism arose in reaction to the problem of universals specifically accounting for the fact that some things are of the same type for example fluffy and kitzler are both cats or the fact that certain properties are repeatable such as the grass the shirt and kermit the frog are green one wants to know by virtue of what are fluffy and kitzler both cats and what makes the grass the shirt and kermit green the platonist answer is that all the green things are green in virtue of the existence of a universal a single abstract thing that in this case is a part of all the green things with respect to the color of the grass the'</li></ul> |
| 41 | <ul><li>'along streams and rivers through parks and across commons another type is the alley normally providing access to the rear of properties or connecting builtup roads not easily reached by vehicles towpaths are another kind of urban footpath but they are often shared with cyclists a typical footpath in a park is found along the seawall in stanley park vancouver british columbia canada this is a segregated path with one lane for skaters and cyclists and the other for pedestriansin the us and canada where urban sprawl has begun to strike even the most rural communities developers and local leaders are currently striving to make their communities more conducive to nonmotorized transportation through the use of less traditional paths the robert wood johnson foundation has established the active living by design program to improve the livability of communities in part through developing trails the upper valley trails alliance has done similar work on traditional trails while the somerville community path and related paths are examples of urban initiatives in st johns newfoundland canada the grand concourse is an integrated walkway system that has over 160 kilometers 99 mi of footpaths which link every major park river pond and green space in six municipalities in london england there are several longdistance walking routes which combine footpaths and roads to link green spaces these include the capital ring london outer orbital path and the jubilee walkway the use of which have been endorsed by transport for london an alley is a narrow usually paved pedestrian path often between the walls of buildings in towns and cities this type is usually short and straight and on steep ground can consist partially or entirely of steps in older cities and towns in europe alleys are often what is left of a medieval street network or a right of way or ancient footpath similar paths also exist in some older north american towns and cities in some older urban development in north america lanes at the rear of houses to allow for deliveries and garbage collection are called alleys alleys may be paved or unpaved and a blind alley is a culdesac some alleys are roofed because they are within buildings such as the traboules of lyon or when they are a pedestrian passage through railway embankments in britain the latter follow the line of rightsof way that existed before the railway was built because of topography steps stairs are the predominant form of alley in hilly cities and towns this includes pittsburgh see steps of pittsburgh cincinnati see steps of cincinnati portland oregon seattle and san francisco in the united states as well as hong kong and rome footpaths and other rights of way have been combined and new paths created so as to produce longdistance walking routes in a number of countries these'</li><li>'the minot area growth through investment and cooperation fund or magic fund is a growth fund financed through a one percent sales tax in the city of minot north dakota the fund was approved by voters on may 1 1990 and the money is used for economic development capital improvements and property tax relief as of 2012 the magic fund has invested over 33 million into 200 projects in 44 communities forty percent of the one percent tax is earmarked for economic development and is used to help finance relocations startups and expansions in the minot area minot area development corporation the lead economic development agency for the city of minot targets primary sector businesses such as those in valueadded agriculture knowledgebased business and the energy industry the availability of magic funds makes minot more appealing to businesses the magic fund is very progressive in that it was one of the first growth funds in the state of north dakota and the first one to be used regionally when the magic fund was originally established it was designed to operate with minimal guidelines to allow for the high level of flexibility necessary when assembling financing and incentive packages to benefit potential businesses and the community of minot this nonrestrictive nature of the fund has been a source of some criticism though local leadership acknowledges that throughout the life of the magic fund it has been a challenge maintain openness with the public about specific spending while at the same time respecting the confidentiality of business information leaders are striving however to keep communications clearin 2005 new magic fund guidelines were set in place to clearly define “ full time ” and to require a breakdown — not an average of — salaries of proposed positions more recently in october 2008 the guidelines of the magic fund underwent public review and area residents were encouraged to offer suggestions suggestions included making magic funds available for private sector projects such as housing recreation and childcare or using the money for infrastructure purposes such as streets and sewer in order to encourage more housing projects after consideration the guidelines review committee decided to continue using magic funding for businessrelated projects the initial creation of the magic fund in may 1990 established it through 2006 and come june 2004 city voters approved an extension of the 1 city sales tax through the year 2014 the magic fund has a rich history of aiding economic development in the minot region and study after study shows the local economy has benefited drastically from its availability historically magic funds have been used in three main areas of primary sector economic development knowledgebased employment agriculture and energy five of the ten largest employers conducting business in minot today were recruited using magic funds choice hotels international was one of the first businesses to be recruited using'</li><li>'##tes to solve problems everything promised by compact cities can be delivered'</li></ul> |
| 16 | <ul><li>'physiographic regions are a means of defining earths landforms into distinct mutually exclusive areas independent of political boundaries it is based upon the classic threetiered approach by nevin m fenneman in 1916 that separates landforms into physiographic divisions physiographic provinces and physiographic sectionsthe classification mechanism has become a popular geographical tool in the united states indicated by the publication of a usgs shapefile that maps the regions of the original work and the national park servicess use of the terminology to describe the regions in which its parks are locatedoriginally used in north america the model became the basis for similar classifications of other continents during the early 1900s the study of regionalscale geomorphology was termed physiography physiography later was considered to be a portmanteau of physical and geography and therefore synonymous with physical geography and the concept became embroiled in controversy surrounding the appropriate concerns of that discipline some geomorphologists held to a geological basis for physiography and emphasized a concept of physiographic regions while a conflicting trend among geographers was to equate physiography with pure morphology separated from its geological heritage in the period following world war ii the emergence of process climatic and quantitative studies led to a preference by many earth scientists for the term geomorphology in order to suggest an analytical approach to landscapes rather than a descriptive one in current usage physiography still lends itself to confusion as to which meaning is meant the more specialized geomorphological definition or the more encompassing physical geography definition for the purposes of physiographic mapping landforms are classified according to both their geologic structures and histories distinctions based on geologic age also correspond to physiographic distinctions where the forms are so recent as to be in their first erosion cycle as is generally the case with sheets of glacial drift generally forms which result from similar histories are characterized by certain similar features and differences in history result in corresponding differences of form usually resulting in distinctive features which are obvious to the casual observer but this is not always the case a maturely dissected plateau may grade without a break from rugged mountains on the one hand to mildly rolling farm lands on the other so also forms which are not classified together may be superficially similar for example a young coastal plain and a peneplain in a large number of cases the boundary lines are also geologic lines due to differences in the nature or structure of the underlying rocks the history of physiography itself is at best a complicated effort much of'</li><li>'##ythagoras contrary to popular belief most educated people in the middle ages did not believe the earth was flat this misconception is often called the myth of the flat earth as evidenced by thinkers such as thomas aquinas the european belief in a spherical earth was widespread by this point in time prior to circumnavigation of the planet and the introduction of space flight belief in a spherical earth was based on observations of the secondary effects of the earths shape and parallels drawn with the shape of other planets humans have commonly traveled for business pleasure discovery and adventure all made easier in recent human history as a result of technologies like cars trains planes and ships land navigation is an aspect of travel and refers to progressing through unfamiliar terrain using navigational tools like maps with references to terrain a compass or satellite navigation navigation on land is often facilitated by reference to landmarks – enduring and recognizable natural or artificial features that stand out from their nearby environment and are often visible from long distances natural landmarks can be characteristic features such as mountains or plateaus with examples including table mountain in south africa mount ararat in turkey the grand canyon in the united states uluru in'</li><li>'##width extra versatility compared to the strahler number however unlike the strahler number the pathwidth is defined only for the whole graph and not separately for each node in the graph main stem of a river typically found by following the branch with the highest strahler number pfafstetter coding system'</li></ul> |
| 24 | <ul><li>'glenstone is a private contemporary art museum in potomac maryland founded in 2006 by american billionaire mitchell rales and his wife emily wei rales the museums exhibitions are drawn from a collection of about 1300 works from postworld war ii artists around the world it is the largest private contemporary art museum in the united states holding more than 46 billion in net assets and is noted for its setting in a broad natural landscape glenstones original building was designed by charles gwathmey with it being expanded several times on its 230acre 93 ha campus its most significant expansion was finished in the late 2010s with outdoor sculpture installations landscaping a new complex designed by thomas phifer and an environmental center being added glenstone has been compared to other private museums such as the frick collection and the phillips collection the museum is free to the public with it seeing over 100000 visitors in 2022 in 1986 billionaire american businessman mitchell rales purchased the property in potomac maryland to build a home starting in 1990 rales began collecting art for that home following a neardeath accident on a helicopter trip in russia rales decided to take on a philanthropic project which became the establishment of a private contemporary art museum built on land that was formerly a fox hunting club glenstone is named for the nearby glen road and because of stone quarries located in the vicinity located 15 miles 24 km from downtown washington dc the museums initial 30000squarefoot 2800 m2 modernist limestone gallery opened in 2006 and admitted visitors two days a week in its first seven years the museum admitted only 10000 visitorsthough several smaller expansions took place in the years after the museums opening the largest expansion was announced in 2013 and was completed in 2018 opening to the public on october 4 2018 with a cost of approximately 219 million the expansion increased the size of the museums gallery space by a factor of five increasing the propertys size by 130 acres 53 ha and included substantial landscaping changes with the expansion glenstone became the largest private contemporary art museum in the united states in 2019 the expansion was named as a museum opening of the year by apollowith the expansion glenstone opened to the public with free tickets available online in the year following the expansion glenstone admitted nearly 100000 visitorsin 2015 glenstone was one of several private museums questioned by the us senate finance committee over its nonprofit tax status after reporting from the new york times had questioned the validity of nonprofit tax status for institutions like glenstone which at the time welcomed very few visitors the committee sought to investigate whether highvalue individuals and families were using private museums as a form of tax shelter committee chairman senator orrin hatch said'</li><li>'in consistently producing organic litter is believed to be more important in reducing erosion than its direct speedreducing effects on raindrops nevertheless gardens are less effective than natural forests in erosion reduction harvesting of rice — the dominant staple of indonesia — influences the use of pekarangans in some ways production in the gardens decreases during riceharvesting season but peaks during the rest of the year lowerincome villagers benefit from the consistent productivity of starch crops in the gardens especially in a period of food shortage prerice harvest or after a failed rice harvest by 
droughtsettlement dynamics affect pekarangans in various ways expansion of settlements to new lands caused by population growth is the cause of the wide presence of food crops in newly made pekarangans people who resettled via the indonesian transmigration program might support plant diversity in the gardens in the places they migrate to plant species brought by internal migrants need to adapt well to the local environmentcommercialization fragmentation and urbanization are major hazards to pekarangans plant diversity these change the organic cycles within the gardens threatening their ecological sustainability commercialization requires a systemic change of crop planting to optimize and produce more crops a pekarangans owner must specialize in its crops making a small number of crops dominate the garden some owners turn them into monoculture gardens fragmentation stems from the traditional system of inheritance consequences from the reduction of plant diversity include the loss of canopy structures and organic litter resulting in less protection of the gardens soil loss of pestcontrol agents increasing the use of pesticides loss of production stability loss of nutrients diversity and the disappearance of yieldssharing culture despite urbanizations negative effect in reducing their plant diversity it increases that of the ornamental plantsa case study of home gardens in napu valley central sulawesi shows that the decrease in soil protection is caused by insufficient soil fertility management regular weeding and waste burning dumping waste in garbage pits instead of using it for compost and spread of inorganic waste the decrease of soil fertility worsens the decrease of crop diversity in the gardens products from pekarangans have multiple uses for example a coconut tree can provide food oil fuel and building materials and also be used in rituals and ceremonies the gardens plants are known for their products nutritional benefits and diversity while rice is low in vitamins a and c products from the gardens offer an abundance of them pekarangans with more perennial crops tend to create more carbohydrates and proteins and those with more annual plants tend to create more portions of vitamin a pekarangans also act as a source of fire'</li><li>'the german fountain turkish alman cesmesi german deutscher brunnen is a gazebo styled fountain in the northern end of old hippodrome sultanahmet square istanbul turkey and across from the mausoleum of sultan ahmed i it was constructed to commemorate the second anniversary of german emperor wilhelm iis visit to istanbul in 1898 it was built in germany then transported piece by piece and assembled in its current site in 1900 the neobyzantine style fountains octagonal dome has eight marble columns and domes interior is covered with golden mosaics the idea of great palace of constantinoples empire lodge kathisma being on the site of the german fountains conflicts with the view that carceres gates of hippodrome was found on the site of the fountain however the hypothesis of carceres gates being on the site enforces the view that quadriga of lysippos was used to stand on the site of the german fountainduring his reign as german emperor and king of prussia wilhelm ii visited several european and eastern countries his trip started in istanbul ottoman empire on 18 october 1898 during the reign of abdulhamid ii according to peter hopkirk the visit to ottoman empire was an ego trip and also had longterm motivations the emperors primary motivation for visiting was to 
construct the baghdad railway which would run from berlin to the persian gulf and would further connect to british india through persia this railway could provide a short and quick route from europe to asia and could carry german exports troops and artillery at the time the ottoman empire could not afford such a railway and abdulhamid ii was grateful to wilhelms offer but was suspicious over the german motives abdulhamid iis secret service believed that german archeologists in the emperors retinue were in fact geologists with designs on the oil wealth of the ottoman empire later the secret service uncovered a german report which noted that the oilfields in mosul northern mesopotamia were richer than that in the caucuses in his first visit wilhelm secured the sale of germanmade rifles to ottoman army and in his second visit he secured a promise for german companies to construct the istanbulbaghdad railway the german government constructed the german fountain for wilhelm ii and empress augustas 1898 istanbul visit according to afife batur the fountains plans were drawn by architect spitta and constructed by architect schoele also german architect carlitzik and italian architect joseph anthony worked on this projectaccording to the ottoman inscription the fountains construction started in the hejira 1319 1898 – 1899 although the inauguration of the fountain was planned to take place on 1'</li></ul> |
| 10 | <ul><li>'inhibits the growth of some harmful gramnegative and grampositive bacteria along with yeasts molds and protozoa l reuteri can secrete sufficient amounts of reuterin to inhibit the growth of harmful gut organisms without killing beneficial gut bacteria allowing l reuteri to remove gut invaders while keeping normal gut flora intactreuterin is watersoluble effective in a wide range of ph resistant to proteolytic and lipolytic enzymes and has been studied as a food preservative or auxiliary therapeutic agentreuterin as an extracted compound has been shown capable of killing escherichia coli o157h7 and listeria monocytogenes with the addition of lactic acid increasing its efficacy it has also been demonstrated to kill escherichia coli o157h7 when produced by l reuteri'</li><li>'thus can affect biological function of the fsl lipids in fsl kode constructs include diacyldiakyl eg dope sterols eg cholesterol ceramides one of the important functions of an fsl construct is that it can optimise the presentation of antigens both on cell surfaces and solidphase membranes this optimisation is achieved primarily by the spacer and secondarily by the lipid tail in a typical immunoassay the antigen is deposited directly onto the microplate surface and binds to the surface either in a random fashion or in a preferred orientation depending on the residues present on the surface of this antigen usually this deposition process is uncontrolled in contrast the fsl kode construct bound to a microplate presents the antigen away from the surface in an orientation with a high level of exposure to the environment furthermore typical immunoassays use recombinant peptides rather than discrete peptide antigens as the recombinant peptide is many times bigger than the epitope of interest a lot of undesired and unwanted peptide sequences are also represented on the microplate these additional sequences may include unwanted microbial related sequences as determined by a blast analysis that can cause issues of low level crossreactivity often the mechanism by which an immunoassay is able to overcome this low level activity is to dilute the serum so that the low level microbial reactive antibodies are not seen and only highlevel specific antibodies result in an interpretable result in contrast fsl kode constructs usually use specifically selected peptide fragments up to 40 amino acids thereby overcoming crossreactivity with microbial sequences and allowing for the use of undiluted serum which increases sensitivity the f component can be further enhanced by presentation of it in multimeric formats and with specific spacing the four types of multimeric format include linear repeating units linear repeating units with spacing clusters and branching fig 4 the fsl kode construct by nature of its composition in possessing both hydrophobic and hydrophilic regions are amphiphilic or amphipathic this characteristic determines the way in which the construct will interact with surfaces when present in a solution they may form simple micelles or adopt more complex bilayer structures with two simplistic examples shown in fig 5a more complex structures are expected the actual nature of fsl micelles has not been determined however based on normal structural function of micelles it is expected that it will be determined in part by the combination of functional group spacer and lipid together'</li><li>'##n1 il1 etc which do not have a signal sequence they do not use the classical ergolgi pathway these are secreted through various 
nonclassical pathways at least four nonclassical unconventional protein secretion pathways have been described they include direct protein translocation across the plasma membrane likely through membrane transport proteins blebbing lysosomal secretion release via exosomes derived from multivesicular bodiesin addition proteins can be released from cells by mechanical or physiological wounding and through nonlethal transient oncotic pores in the plasma membrane induced by washing cells with serumfree media or buffers many human cell types have the ability to be secretory cells they have a welldeveloped endoplasmic reticulum and golgi apparatus to fulfill this function tissues that produce secretions include the gastrointestinal tract which secretes digestive enzymes and gastric acid the lungs which secrete surfactants and sebaceous glands which secrete sebum to lubricate the skin and hair meibomian glands in the eyelid secrete meibum to lubricate and protect the eye secretion is not unique to eukaryotes – it is also present in bacteria and archaea as well atp binding cassette abc type transporters are common to the three domains of life some secreted proteins are translocated across the cytoplasmic membrane by the secyeg translocon one of two translocation systems which requires the presence of an nterminal signal peptide on the secreted protein others are translocated across the cytoplasmic membrane by the twinarginine translocation pathway tat gramnegative bacteria have two membranes thus making secretion topologically more complex there are at least six specialized secretion systems in gramnegative bacteria many secreted proteins are particularly important in bacterial pathogenesis type i secretion is a chaperone dependent secretion system employing the hly and tol gene clusters the process begins as a leader sequence on the protein to be secreted is recognized by hlya and binds hlyb on the membrane this signal sequence is extremely specific for the abc transporter the hlyab complex stimulates hlyd which begins to uncoil and reaches the outer membrane where tolc recognizes a terminal molecule or signal on hlyd hlyd recruits tolc to the inner membrane and hlya is excreted outside of the outer membrane via a longtunnel protein channel type i secretion system transports various molecules from ions drugs to'</li></ul> |
| 1 | <ul><li>'first to form followed by the oblique shock shock diamonds are most commonly associated with jet and rocket propulsion but they can form in other systems shock diamonds can be seen during gas pipeline blowdowns because the gas is under high pressure and exits the blowdown valve at extreme speeds when artillery pieces are fired gas exits the cannon muzzle at supersonic speeds and produces a series of shock diamonds the diamonds cause a bright muzzle flash which can expose the location of gun emplacements to the enemy it was found that when the ratio between the flow pressure and atmospheric pressure is close which can be achieved with a flash suppressor the shock diamonds were greatly minimized adding a muzzle brake to the end of the muzzle balances the pressures and prevents shock diamonds 41 some radio jets powerful jets of plasma that emanate from quasars and radio galaxies are observed to have regularlyspaced knots of enhanced radio emissions 68 the jets travel at supersonic speed through a thin atmosphere of gas in space 51 so it is hypothesized that these knots are shock diamonds index of aviation articles plume hydrodynamics rocket engine nozzle'</li><li>'##al change in location of the marker can be calculated by collecting results from a few markers the degree to which the model is flexibly yielding due to the air load can be calculated there are many different kinds of wind tunnels they are typically classified by the range of speeds that are achieved in the test section as follows lowspeed wind tunnel high speed wind tunnel subsonic and transonic wind tunnel supersonic wind tunnel hypersonic wind tunnel high enthalpy wind tunnelwind tunnels are also classified by the orientation of air flow in the test section with respect to gravity typically they are oriented horizontally as happens during level flight a different class of wind tunnels are oriented vertically so that gravity can be balanced by drag instead of lift and these have become a popular form of recreation for simulating skydiving vertical wind tunnelwind tunnels are also classified based on their main use for those used with land vehicles such as cars and trucks the type of floor aerodynamics is also important these vary from stationary floors through to full moving floors with smaller moving floors and some attempt at boundary level control also being important the main subcategories in the aeronautical wind tunnels are high reynolds number tunnels reynolds number is one of the governing similarity parameters for the simulation of flow in a wind tunnel for mach number less than 03 it is the primary parameter that governs the flow characteristics there are three main ways to simulate high reynolds number since it is not practical to obtain full scale reynolds number by use of a full scale vehicle pressurised tunnels here test gases are pressurised to increase the reynolds number heavy gas tunnels heavier gases like freon and r134a are used as test gases the transonic dynamics tunnel at nasa langley is an example of such a tunnel cryogenic tunnels here test gas is cooled down to increase the reynolds number the european transonic wind tunnel uses this technique highaltitude tunnels these are designed to test the effects of shock waves against various aircraft shapes in near vacuum in 1952 the university of california constructed the first two highaltitude wind tunnels one for testing objects at 50 to 70 miles above the earth and the second for tests at 80 to 200 miles above the earth vstol tunnels vstol 
tunnels require large cross section area but only small velocities since power varies with the cube of velocity the power required for the operation is also less an example of a vstol tunnel is the nasa langley 14 by 22 ft 43 by 67 m tunnel spin tunnels aircraft have a tendency to spin when they stall these tunnels are used to study that phenomenon automotive wind tunnels fall into two categories'</li><li>'high speed requires at least a 2dimensional treatment when all 3 spatial dimensions and perhaps the time dimension as well are important we often resort to computerized solutions of the governing equations the mach number m is defined as the ratio of the speed of an object or of a flow to the speed of sound for instance in air at room temperature the speed of sound is about 340 ms 1100 fts m can range from 0 to ∞ but this broad range falls naturally into several flow regimes these regimes are subsonic transonic supersonic hypersonic and hypervelocity flow the figure below illustrates the mach number spectrum of these flow regimes these flow regimes are not chosen arbitrarily but rather arise naturally from the strong mathematical background that underlies compressible flow see the cited reference textbooks at very slow flow speeds the speed of sound is so much faster that it is mathematically ignored and the mach number is irrelevant once the speed of the flow approaches the speed of sound however the mach number becomes allimportant and shock waves begin to appear thus the transonic regime is described by a different and much more complex mathematical treatment in the supersonic regime the flow is dominated by wave motion at oblique angles similar to the mach angle above about mach 5 these wave angles grow so small that a different mathematical approach is required defining the hypersonic speed regime finally at speeds comparable to that of planetary atmospheric entry from orbit in the range of several kms the speed of sound is now comparatively so slow that it is once again mathematically ignored in the hypervelocity regime as an object accelerates from subsonic toward supersonic speed in a gas different types of wave phenomena occur to illustrate these changes the next figure shows a stationary point m 0 that emits symmetric sound waves the speed of sound is the same in all directions in a uniform fluid so these waves are simply concentric spheres as the soundgenerating point begins to accelerate the sound waves bunch up in the direction of motion and stretch out in the opposite direction when the point reaches sonic speed m 1 it travels at the same speed as the sound waves it creates therefore an infinite number of these sound waves pile up ahead of the point forming a shock wave upon achieving supersonic flow the particle is moving so fast that it continuously leaves its sound waves behind when this occurs the locus of these waves trailing behind the point creates an angle known as the mach wave angle or mach angle μ μ arcsin a v arcsin 1 m displaystyle mu arcsin leftfrac avrightarcsin leftfrac 1mright where a displaystyle a'</li></ul> |
| 32 | <ul><li>'for producing precision lengths by stacking components which are joined temporarily in a similar fashion'</li><li>'this step does the preforming of green raw bodies of the mould inserts sintering by sintering the preformed green bodies are compressed and hardened in order to do this the green body is heated to a temperature below the melting temperature the sintering process consists of three phases first the volume and the porosity is reduced and secondly the open porosity is reduced in the third phase sinter necks are formed which enhance the materials strength premachining the step of premachining creates the main form of the optical insert it typically contains four process steps these steps are grinding the innerouter diameter grinding the parallelend faces of the insert grindinglapping of the fitting of insert and finally the nearnetshape grinding of the cavity normally the cavity is only premachined to a flat or a bestfit sphere grinding grinding or finishmachining creates the final form and the surface finish of the cavity in the mould insert usually the finish is carried out by grinding a subsequent polishing step is optionally required finish grinding can require several changes of the grinding tool and several truing steps of the tool finishmachining of the mould is an iterative process as long as the machined mould shows deviations from the nominal contour in the measurement step after grinding it has to be reground there is no welldefined border between premachining and fine grinding throughout the grinding process of the cavity the grain size of the tool the feed rate and the cutting depth are reduced whereas machining time increases convex surfaces are easier to manufacture the necessary steps of workpiece preparation are the mould alignment and the mould referencing grinding tool alignment grinding tool referencing and grinding tool truing also have to be done after that polishing can be necessary to remove the anisotropic structure which remains after grinding it can be performed manually or by a cncmachine coating coating is the process step in which a layer is applied on the cavity surface of the optical insert which protects the mould against wear corrosion friction sticking of glass and chemical reactions with glass for coating the surface of moulds by physical vapour deposition pvd metals are evaporated in combination with processgasbased chemicals on the tool surface highly adherent thin coatings are synthesized materials for coatings on optical inserts are platinumbased pvd mostly iridiumalloyed standard diamondlike carbon not yet commercially available sic cvd on sicceramics not yet commercially available have to be postmachined or tialn not yet commercially available to achieve a homogeneous layer thickness the'</li><li>'gag bennet 1974 electricity and modern physics 2nd ed edward arnold uk isbn 0713124598 is grant wr phillips manchester physics 2008 electromagnetism 2nd ed john wiley sons isbn 9780471927129 dj griffiths 2007 introduction to electrodynamics 3rd ed pearson education dorling kindersley isbn 9788177582932 lh greenberg 1978 physics with modern applications holtsaunders international wb saunders and co isbn 0721642470 jb marion wf hornyak 1984 principles of physics holtsaunders international saunders college isbn 4833701952 a beiser 1987 concepts of modern physics 4th ed mcgrawhill international isbn 0071001441 hd young ra freedman 2008 university physics – with modern physics 12th ed addisonwesley pearson international isbn 
9780321501301'</li></ul> |
| 26 | <ul><li>'between roughness because due to this tangential component plastic deformation comes with a lower load than when ignoring this component a more realistic description then of the area of each single junction that is created is given by with α displaystyle alpha constant and a tangent force f → i displaystyle vec fi applied to the joint to obtain even more realistic considerations the phenomenon of the third body should also be considered ie the presence of foreign materials such as moisture oxides or lubricants between the two solids in contact a coefficient c is then introduced which is able to correlate the shear strength t of the pure material and that of the third body t t b displaystyle ttb with 0 c 1 by studying the behavior at the limits it will be that for c 0 t 0 and for c 1 it returns to the condition in which the surfaces are directly in contact and there is no presence of a third body keeping in mind what has just been said it is possible to correct the friction coefficient formula as follows in conclusion the case of elastic bodies in interaction with each other is considered similarly to what we have just seen it is possible to define an equation of the type where in this case k depends on the elastic properties of the materials also for the elastic bodies the tangential force depends on the coefficient c seen above and it will be and therefore a fairly exhaustive description of the friction coefficient can be obtained friction measurements the simplest and most immediate method for evaluating the friction coefficient of two surfaces is the use of an inclined plane on which a block of material is made to slide as can be seen in the figure the normal force of the plane is given by m g cos θ displaystyle mgcos theta while the frictional force is equal to m g sin θ displaystyle mgsin theta this allows us to state that the coefficient of friction can be calculated very easily by means of the tangent of the angle in which the block begins to slip in fact we have then from the inclined plane we moved on to more sophisticated systems which allow us to consider all the possible environmental conditions in which the measurement is made such as the crossroller machine or the pin and disk machine today there are digital machines such as the friction tester which allows by means of a software support to insert all the desired variables another widely used process is the ring compression test a flat ring of the material to be studied is plastically deformed by means of a press if the deformation is an expansion in both the inner and the outer circle then there will be low or zero friction coefficients otherwise for a deformation that expands only in'</li><li>'the metallurgical production of the republic of azerbaijan is considered high due to the large deposits of alunite polymetallic ores deposits of iron ore etc the metallurgy industry of azerbaijan encompasses both ferrous and nonferrous branches ferrous metallurgy includes extraction of iron smelting and refining of iron ore rolling and ferroalloys production the ferrous metallurgy production of the country started to meet the demand of oil and gas industry due to pipe production and grew further in order to improve other branches of the industry dashkasan iron ore in 4 deposits dashkesen south dashkasan hamanchay demiroglu in the valley of goshagarchay plays a key role in development of ferrous metallurgy the cities of baku sumgait and dashkesan are major centers of metallurgy in terms of extraction and processing of 
iron ore the sumgait piperolling plant produces drill pipes casing tubing oil and gas pipes etc bentonite clay deposits in the village of dash salakhly gazakh district is used in steel smelting baku steel company the largest metallurgical enterprise in azerbaijan was opened in 2001 on the initiative of heydar aliyev with two electric arc furnaces and three rolling lines the annual steel production capacity of company increased to 1000000 tons aluminum copper molybdenum cobalt mercury reserves and most importantly electricity for the smelting process has led to the development of nonferrous metallurgy the zeylik mine in daskasan district is the main provider of the alunite for aluminum production the extracted ore here transported through guschualabashli railway to the aluminum plant located in ganja city the obtained aluminum oxide is brought to sumgayit aluminum plant in order produce aluminum metal ganja aluminum plant produces sulfuric acid aluminum oxide and potassium fertilizer through extracted ore from zalik deposit in dashkesen aluminum oxide is also produced in sumgait azergold cjsc created by the presidential decree no 1047 on february 11 2015 implements exploration management and also extraction processing and sale of precious and nonferrous metal ore deposits located within the borders of the country in 2017 the volume of exports of precious metals carried out by this company amounted to 77340 million dollars gold mining began in gedebey in 2009 in 2016 azer gold cjsc began gold mining in the chovdar deposit in 2017 63908 kg of gold was mined which exceeded the 2016 production by 34 times gold production'</li><li>'the material they are most found in these are given in miller indices for simplification purposes cube component 001100 brass component 110112 copper component 112111 s component 123634 the full 3d representation of crystallographic texture is given by the orientation distribution function odf which can be achieved through evaluation of a set of pole figures or diffraction patterns subsequently all pole figures can be derived from the odf the odf is defined as the volume fraction of grains with a certain orientation g displaystyle boldsymbol g odf g 1 v d v g d g displaystyle textodfboldsymbol gfrac 1vfrac dvboldsymbol gdg the orientation g displaystyle boldsymbol g is normally identified using three euler angles the euler angles then describe the transition from the sample ’ s reference frame into the crystallographic reference frame of each individual grain of the polycrystal one thus ends up with a large set of different euler angles the distribution of which is described by the odf the orientation distribution function odf cannot be measured directly by any technique traditionally both xray diffraction and ebsd may collect pole figures different methodologies exist to obtain the odf from the pole figures or data in general they can be classified based on how they represent the odf some represent the odf as a function sum of functions or expand it in a series of harmonic functions others known as discrete methods divide the odf space in cells and focus on determining the value of the odf in each cell in wire and fiber all crystals tend to have nearly identical orientation in the axial direction but nearly random radial orientation the most familiar exceptions to this rule are fiberglass which has no crystal structure and carbon fiber in which the crystalline anisotropy is so great that a goodquality filament will be a distorted single crystal with approximately 
cylindrical symmetry often compared to a jelly roll singlecrystal fibers are also not uncommon the making of metal sheet often involves compression in one direction and in efficient rolling operations tension in another which can orient crystallites in both axes by a process known as grain flow however cold work destroys much of the crystalline order and the new crystallites that arise with annealing usually have a different texture control of texture is extremely important in the making of silicon steel sheet for transformer cores to reduce magnetic hysteresis and of aluminium cans since deep drawing requires extreme and relatively uniform plasticity texture in ceramics usually arises because the crystallites in a slurry'</li></ul> |
| 15 | <ul><li>'is could effectively be used as a geneediting tool in human 2pn zygotes which could lead potentially pregnancy viable if implanted the scientists used injection of cas9 protein complexed with the relevant sgrnas and homology donors into human embryos the scientists found homologous recombinationmediated alteration in hbb and g6pd the scientists also noted the limitations of their study and called for further researchin august 2017 a group of scientists from oregon published an article in nature journal detailing the successful use of crispr to edit out a mutation responsible for congenital heart disease the study looked at heterozygous mybpc3 mutation in human embryos the study claimed precise crisprcas9 and homologydirected repair response with high accuracy and precision doublestrand breaks at the mutant paternal allele were repaired using the homologous wildtype gene by modifying the cell cycle stage at which the dsb was induced they were able to avoid mosaicism which had been seen in earlier similar studies in cleaving embryos and achieve a large percentage of homozygous embryos carrying the wildtype mybpc3 gene without evidence of unintended mutations the scientists concluded that the technique may be used for the correction of mutations in human embryos the claims of this study were however pushed back on by critics who argued the evidence was overall unpersuasivein june 2018 a group of scientists published and article in nature journal indicating a potential link for edited cells having increased potential turn cancerous the scientists reported that genome editing by crisprcas9 induced dna damage response and the cell cycle stopped the study was conducted in human retinal pigment epithelial cells and the use of crispr led to a selection against cells with a functional p53 pathway the conclusion of the study would suggest that p53 inhibition might increase efficiency of human germline editing and that p53 function would need to be watched when developing crisprcas9 based therapyin november 2018 a group of chinese scientists published research in the journal molecular therapy detailing their use of crisprcas9 technology to correct a single mistaken amino acid successfully in 16 out of 18 attempts in a human embryo the unusual level of precision was achieved by the use of a base editor be system which was constructed by fusing the deaminase to the dcas9 protein the be system efficiently edits the targeted c to t or g to a without the use of a donor and without dbs formation the study focused on the fbn1 mutation that is causative for mar'</li><li>'by the american nurses association which provides rules regulations and guidelines to follow when making a decision that is ethical based these regulations were mainly established to help provide equal healthcare protect the rights safety and privacy of the patient and to hold nurses accountable for their actions and choices genetics can create ethical issues in nursing for a variety of different situations many scenarios questions and debates have been encountered such as what individuals can receive genetic testing or information who owns or controls the information received from the genetic test and how can the owner use that information however the code of ethics does not address genetics or genomics specifically so ethical foundations were also established to help guide genetics into health care the foundations provide a set of guidelines to understand and manage an ethical issue if one should arise and to assist in the 
translation of genetics into the healthcare environment'</li><li>'than is accurate to the population this is known as the shadow effect the cabrera vole microtus cabrerae is a small endangered rodent that belongs to the microtus genus existing primarily in portugal populations can be difficult to estimate using typical markrecapture methods due to their small size and ability to quickly disperse over large swaths of prairie land with the introduction and reduced cost of using environmental dna in this case feces were able to be used in a relatively low cost experiment to estimate the population size of the cabrera vole in southern portugal in return for sacrificing demographic age sex health information endangered species act of 1973'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.6909 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("udrearobert999/multi-qa-mpnet-base-cos-v1-test")
# Run inference
preds = model("##rch procedure that evaluates the objective function p x displaystyle pmathbf x on a grid of candidate source locations g displaystyle mathcal g to estimate the spatial location of the sound source x s displaystyle textbf xs as the point of the grid that provides the maximum srp modifications of the classical srpphat algorithm have been proposed to reduce the computational cost of the gridsearch step of the algorithm and to increase the robustness of the method in the classical srpphat for each microphone pair and for each point of the grid a unique integer tdoa value is selected to be the acoustic delay corresponding to that grid point this procedure does not guarantee that all tdoas are associated to points on the grid nor that the spatial grid is consistent since some of the points may not correspond to an intersection of hyperboloids this issue becomes more problematic with coarse grids since when the number of points is reduced part of the tdoa information gets lost because most delays are not anymore associated to any point in the grid the modified srpphat collects and uses the tdoa information related to the volume surrounding each spatial point of the search grid by considering a modified objective function where l m 1 m 2 l x displaystyle lm1m2lmathbf x and l m 1 m 2 u x displaystyle lm1m2umathbf x are the lower and upper accumulation limits of gcc delays which depend on the spatial location x displaystyle mathbf x the accumulation limits can be calculated beforehand in an exact way by exploring the boundaries separating the regions corresponding to the points of the grid alternatively they can be selected by considering the spatial gradient of the tdoa ∇ τ m 1 m 2 x ∇ x τ m 1 m 2 x ∇ y τ m 1 m 2 x ∇ z τ m 1 m 2 x t displaystyle nabla tau m1m2mathbf x nabla xtau m1m2mathbf x nabla ytau m1m2mathbf x nabla ztau m1m2mathbf x t where each component γ ∈ x y z displaystyle gamma in leftxyzright of the gradient is for a rectangular grid where neighboring points are separated a distance r displaystyle r the lower and upper accumulation limits are given by where d r 2 min 1 sin θ cos [UNK] 1 sin θ sin [UNK] 1 cos θ displaystyle dr2min leftfrac 1vert sintheta cosphi vert frac 1vert sintheta sinphi vert frac 1vert")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:----|
| Word count | 1 | 370.3098 | 509 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 50 |
| 1 | 50 |
| 2 | 50 |
| 3 | 50 |
| 4 | 50 |
| 5 | 50 |
| 6 | 50 |
| 7 | 50 |
| 8 | 50 |
| 9 | 50 |
| 10 | 50 |
| 11 | 50 |
| 12 | 50 |
| 13 | 50 |
| 14 | 50 |
| 15 | 50 |
| 16 | 50 |
| 17 | 50 |
| 18 | 50 |
| 19 | 50 |
| 20 | 50 |
| 21 | 50 |
| 22 | 50 |
| 23 | 50 |
| 24 | 50 |
| 25 | 50 |
| 26 | 50 |
| 27 | 50 |
| 28 | 50 |
| 29 | 50 |
| 30 | 50 |
| 31 | 50 |
| 32 | 50 |
| 33 | 50 |
| 34 | 50 |
| 35 | 50 |
| 36 | 50 |
| 37 | 50 |
| 38 | 50 |
| 39 | 50 |
| 40 | 50 |
| 41 | 50 |
| 42 | 50 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 4)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 10
- body_learning_rate: (2e-05, 0.01)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- max_length: 512
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
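The values listed above map directly onto SetFit's `TrainingArguments`. Below is a minimal, illustrative training sketch that reuses them; the base Sentence Transformer (inferred from the model name) and the toy dataset are assumptions, since the card does not spell them out.
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments
# Toy few-shot dataset with "text" and "label" columns (placeholder values).
train_dataset = Dataset.from_dict({
"text": ["first example document", "second example document"],
"label": [0, 1],
})
# Base encoder assumed from the model name.
model = SetFitModel.from_pretrained("sentence-transformers/multi-qa-mpnet-base-cos-v1")
args = TrainingArguments(
batch_size=(16, 16), # (embedding phase, classifier phase)
num_epochs=(1, 4),
body_learning_rate=(2e-05, 0.01),
head_learning_rate=0.01,
num_iterations=10, # contrastive pair-generation iterations
sampling_strategy="oversampling",
max_length=512,
seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
trainer.model.save_pretrained("multi-qa-mpnet-base-cos-v1-setfit")
```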
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:----------:|:--------:|:-------------:|:---------------:|
| 0.0004 | 1 | 0.3114 | - |
| 0.1860 | 500 | 0.0379 | - |
| 0.3720 | 1000 | 0.1131 | - |
| 0.5580 | 1500 | 0.0567 | - |
| **0.7440** | **2000** | **0.0168** | **0.1033** |
| 0.9301 | 2500 | 0.0033 | - |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.1
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION",
"TRANSLATION"
] |
Non_BioNLP
|
AntX-ai/AntX-7B
|
AntX-ai
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"dataset:BAAI/COIG-PC",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,690,597,866,000 | 2023-07-29T03:05:21 | 18 | 2 |
---
datasets:
- BAAI/COIG-PC
language:
- zh
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is an experimental model that can be used to create new LLMs based on the Chinese language.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** yjf9966
- **Model type:** LLaMA with enhanced tokenizer-size-49954
- **Language(s) (NLP):** Chinese/English
- **License:** Apache-2.0
- **Finetuned from model:** [Chinese-LLaMA-Alpaca](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://huggingface.co/AntX-ai/AntX-7B
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
You can use the raw model for text generation, but it is mostly intended to be fine-tuned on a downstream task.
Note that this model is primarily aimed at being fine-tuned on tasks that condition on a whole prompt to produce a decision or a response, such as instruction following, sequence classification or question answering.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Even if the training data used for this model could be characterized as fairly neutral, this model can still produce biased predictions.
It also inherits some of the biases of its base model and training dataset.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import LlamaForCausalLM, LlamaTokenizer
import torch

base_model_name = "AntX-ai/AntX-7B"
load_type = torch.float16
device = None

generation_config = dict(
    temperature=0.2,
    top_k=40,
    top_p=0.9,
    do_sample=True,
    num_beams=1,
    repetition_penalty=1.3,
    max_new_tokens=400
)

prompt_input = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n\n{instruction}\n\n### Response:\n\n"
)

if torch.cuda.is_available():
    device = torch.device(0)
else:
    device = torch.device('cpu')

def generate_prompt(instruction, input=None):
    if input:
        instruction = instruction + '\n' + input
    return prompt_input.format_map({'instruction': instruction})

tokenizer = LlamaTokenizer.from_pretrained(base_model_name)
model = LlamaForCausalLM.from_pretrained(
    base_model_name,
    load_in_8bit=False,
    torch_dtype=load_type,
    low_cpu_mem_usage=True,
    device_map='auto',
)

# Make sure the embedding matrix matches the enhanced 49954-token tokenizer.
model_vocab_size = model.get_input_embeddings().weight.size(0)
tokenizer_vocab_size = len(tokenizer)
if model_vocab_size != tokenizer_vocab_size:
    model.resize_token_embeddings(tokenizer_vocab_size)

raw_input_text = input("Input:")
input_text = generate_prompt(instruction=raw_input_text)
inputs = tokenizer(input_text, return_tensors="pt")

generation_output = model.generate(
    input_ids=inputs["input_ids"].to(device),
    attention_mask=inputs['attention_mask'].to(device),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id,
    **generation_config
)
s = generation_output[0]
output = tokenizer.decode(s, skip_special_tokens=True)
response = output.split("### Response:")[1].strip()
print("Response: ", response)
print("\n")
```
## Training Details
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
80% of the dataset was used for training and 20% for testing.
#### Training Hyperparameters
- **Training regime:** fp16 mixed precision, lr=1e-4, lora_rank=8, lora_alpha=32 <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
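For reference, a minimal sketch of how these hyperparameters could be expressed with the PEFT library is shown below; the target modules, dropout, batch size and epoch count are assumptions, since the card only states fp16 mixed precision, lr=1e-4, lora_rank=8 and lora_alpha=32.
```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import LlamaForCausalLM, TrainingArguments
base = LlamaForCausalLM.from_pretrained("AntX-ai/AntX-7B")
# LoRA settings taken from the hyperparameters above; target_modules and dropout are assumptions.
lora_config = LoraConfig(
task_type=TaskType.CAUSAL_LM,
r=8,
lora_alpha=32,
lora_dropout=0.05, # not stated in the card; assumed
target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
# These arguments would then be passed to a transformers.Trainer together with the
# tokenized COIG-PC training split; batch size and epoch count are assumptions.
training_args = TrainingArguments(
output_dir="antx-7b-lora",
learning_rate=1e-4,
fp16=True, # fp16 mixed precision as stated above
per_device_train_batch_size=4,
num_train_epochs=1,
)
```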
## Evaluation
#### Testing Data
<!-- This should link to a Data Card if possible. -->
20% of the BAAI/COIG-PC dataset.
```
Input:王国维说:“自周之衰,文王、周公势力之瓦解也,国民之智力成熟于内,政治之纷乱乘之于外,上无统一之制度,下迫于社会之要求,于是诸于九流各创其学说。” 他意在说明 A. 分封制的崩溃 B. 商鞅变法的作用 C. 兼并战争的后果 D. 百家争鸣的原因
Response: 本题考查对材料的理解。A错误;B正确;C和D与材料无关。故选BC两项即可
Input:经济基础是指一定社会发展阶段占统治地位的生产关系各个方面的总和。解决了地方经济问题, 也就解 决了地方割据问题, 为此, 宋太祖采取的措施是( ) A . 地方设转运使, 财赋收归中央 B . 派文臣担任各地州县的长官 C . 派文臣管理地方政事 D . 采取分化事权办法, 削弱相权
Response: A: 本题考查对宋太祖治下地方问题的认识。 A : 依据材料可知, 在北宋时期 , 由于地主阶级的发展壮大以及商业、手工业等新兴行业出现 , 这一时期出现的地方割据现象严重威胁着国家的统一与稳定 . 因此 , 为了解决这个问题, 需要加强中央集权 ; 故选 A
Input:go\\n/**\\n * Definition for a binary tree node.\\n * type TreeNode struct {\\n * Val int\\n * Left *TreeNode\\n * Right *TreeNode\\n * }\\n */\\nfunc maxDepth(root *TreeNode) int {\\n\\tif root == nil {\\n\\t\\treturn 0\\n\\t}\\n\\tl, r := maxDepth(root.Left), maxDepth(root.Right)\\n\\treturn 1 + max(l, r)\\n}\\n\\nfunc max(a, b int) int {\\n\\tif a > b {\\n\\t\\treturn a\\n\\t}\\n\\treturn b\\n}\\n
Response: ```python
def max_depth(root): # The function takes in a single parameter `root` and returns its maximum depth value as output.
if not root or len(root.children()) == 0:
return 0
l = max_depth(root.left)
r = max_depth(root.right)
return 1 + (max(l, r))
class TreeNode:
def __init__(self, val=None, left=10, right=0):
self.val = val
self.left = None
self.right = None
```
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
```
@software{AntX-7B,
  title={An Enhanced Chinese Language Model based on the Chinese-LLaMA-Alpaca},
url={https://huggingface.co/AntX-ai/AntX-7B},
year={2023}
}
```
|
[
"QUESTION_ANSWERING"
] |
Non_BioNLP
|
IronOne-AI-Labs/long-t5-16k-annual-report-QLoRA-fine-tuned-v1.1
|
IronOne-AI-Labs
| null |
[
"transformers",
"safetensors",
"Summarization",
"S",
"u",
"m",
"a",
"r",
"i",
"z",
"t",
"o",
"n",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | 1,721,999,062,000 | 2024-07-26T13:04:32 | 0 | 0 |
---
library_name: transformers
tags:
- Summarization
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
Thannok1727/Prompt1727
|
Thannok1727
|
summarization
|
[
"adapter-transformers",
"summarization",
"en",
"dataset:HuggingFaceFV/finevideo",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:adapter:meta-llama/Llama-3.2-1B",
"license:cc-by-nc-2.0",
"region:us"
] | 1,728,849,264,000 | 2024-10-13T19:58:27 | 0 | 0 |
---
base_model:
- meta-llama/Llama-3.2-1B
datasets:
- HuggingFaceFV/finevideo
language:
- en
library_name: adapter-transformers
license: cc-by-nc-2.0
pipeline_tag: summarization
new_version: meta-llama/Llama-3.2-11B-Vision-Instruct
---
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
philschmid/openai-whisper-endpoint
|
philschmid
|
automatic-speech-recognition
|
[
"generic",
"audio",
"automatic-speech-recognition",
"endpoints-template",
"license:mit",
"endpoints_compatible",
"region:us"
] | 1,663,964,864,000 | 2022-09-23T21:26:56 | 0 | 11 |
---
library_name: generic
license: mit
tags:
- audio
- automatic-speech-recognition
- endpoints-template
inference: false
---
# OpenAI [Whisper](https://github.com/openai/whisper) Inference Endpoint example
> Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.
For more information about the model, license and limitations check the original repository at [openai/whisper](https://github.com/openai/whisper).
---
This repository implements a custom `handler` task for `automatic-speech-recognition` for 🤗 Inference Endpoints using OpenAIs new Whisper model. The code for the customized handler is in [handler.py](https://huggingface.co/philschmid/openai-whisper-endpoint/blob/main/handler.py).
There is also a [notebook](https://huggingface.co/philschmid/openai-whisper-endpoint/blob/main/create_handler.ipynb) included, on how to create the `handler.py`
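For orientation, the sketch below shows the general shape of such a custom handler (an `EndpointHandler` class with `__init__` and `__call__`); the model size, the `inputs` payload key and the audio-decoding details are assumptions, and the actual `handler.py` in this repository may differ.
```python
import tempfile
from typing import Any, Dict
import whisper
class EndpointHandler:
def __init__(self, path: str = ""):
# Load the Whisper checkpoint once when the endpoint starts.
# The actual repository may load a different model size.
self.model = whisper.load_model("base")
def __call__(self, data: Dict[str, Any]) -> Dict[str, str]:
# Inference Endpoints pass the raw request body under "inputs" (assumption).
audio_bytes = data["inputs"]
# Whisper expects a file path or a waveform, so write the bytes to a temp file.
with tempfile.NamedTemporaryFile(suffix=".flac") as f:
f.write(audio_bytes)
f.flush()
result = self.model.transcribe(f.name)
return {"text": result["text"]}
```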
### Request
The endpoint expects a binary audio file. Below is a cURL example and a Python example using the `requests` library.
**curl**
```bash
# load audio file
wget https://cdn-media.huggingface.co/speech_samples/sample1.flac
# run request
curl --request POST \
--url https://{ENDPOINT}/ \
--header 'Content-Type: audio/x-flac' \
--header 'Authorization: Bearer {HF_TOKEN}' \
--data-binary '@sample1.flac'
```
**Python**
```python
import json
from typing import List
import requests as r
import base64
import mimetypes
ENDPOINT_URL=""
HF_TOKEN=""
def predict(path_to_audio: str = None):
    # read audio file
    with open(path_to_audio, "rb") as i:
        b = i.read()
    # get mimetype
    content_type = mimetypes.guess_type(path_to_audio)[0]
    headers = {
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": content_type
    }
    response = r.post(ENDPOINT_URL, headers=headers, data=b)
    return response.json()
prediction = predict(path_to_audio="sample1.flac")
prediction
```
expected output
```json
{"text": " going along slushy country roads and speaking to damp audiences in draughty school rooms day after day for a fortnight. He'll have to put in an appearance at some place of worship on Sunday morning, and he can come to us immediately afterwards."}
```
|
[
"TRANSLATION"
] |
Non_BioNLP
|
adowu/astral-256k-7b-v2
|
adowu
|
text-generation
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"astral",
"256k",
"long",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,712,722,610,000 | 2024-04-10T04:59:02 | 10 | 0 |
---
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- astral
- 256k
- long
- mistral
---
### ASTRAL-256k-7b-v2
The adowu/astral-256k-7b-v2 is a cutting-edge language model developed on the MistralForCausalLM architecture, designed for advanced causal language modeling tasks. This model stands out for its ability to understand and generate text with remarkable depth and context awareness, making it highly effective for a wide range of natural language processing (NLP) applications.
## Key Features
- Advanced Architecture: Utilizes the MistralForCausalLM framework, enabling efficient and effective text processing and generation.
- Large Model Scale: With roughly 7B parameters (as indicated by the model name), it captures and processes a large amount of information, enhancing its understanding and generation capabilities.
- Extended Sequence Handling: Designed for very long input sequences (the "256k" in the model name refers to its extended context length), making it well suited to tasks that require extensive contextual information.
## Performance and Efficiency
Optimized for high performance, the model employs techniques to balance computational efficiency with output precision. This optimization ensures it can be deployed effectively across various platforms, including those supporting bfloat16 computations, without significant loss in the quality of generated text.
## Application Potential
The model's sophisticated understanding and text generation capabilities make it ideal for several advanced applications:
- Content Generation: From articles and reports to creative writing, it can produce coherent and contextually rich content.
- Conversational Systems: Powers chatbots and virtual assistants, facilitating deep and meaningful interactions over extended conversations.
- Complex Language Understanding Tasks: Performs well on summarization, translation, and other tasks over long documents, showcasing its ability to handle detailed and nuanced language. A brief usage sketch is given below.
- **Developed by:** aww
- **Model type:** Mistral
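As a starting point, the snippet below is a minimal, illustrative way to load and prompt the model with the standard `transformers` causal-LM API; the bfloat16 setting and generation parameters are assumptions based on the description above.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "adowu/astral-256k-7b-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16, # the card mentions bfloat16 support; adjust to your hardware
device_map="auto",
)
prompt = "Summarize the following report:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```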
|
[
"TRANSLATION",
"SUMMARIZATION"
] |
Non_BioNLP
|
srimoyee12/my_awesome_model
|
srimoyee12
|
text-classification
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,680,061,039,000 | 2023-04-03T03:46:19 | 15 | 0 |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: srimoyee12/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# srimoyee12/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [Auditor Review Dataset](https://huggingface.co/datasets/demo-org/auditor_review).
It achieves the following results on the evaluation set:
- Train Loss: 0.1735
- Validation Loss: 0.3834
- Train Accuracy: 0.8524
- Epoch: 3
## Model description
This is a simple classifier model based on DistilBERT. It classifies input text as Negative, Neutral, or Positive sentiment.
## Intended uses & limitations
Can be used for text classification.
This is created for illustration purposes and might not have the highest accuracy.
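A minimal, hedged usage sketch follows; since the checkpoint was saved with Keras, TensorFlow must be installed, and the returned label names are an assumption (they may appear as `LABEL_0`/`LABEL_1`/`LABEL_2` rather than Negative/Neutral/Positive):

```python
from transformers import pipeline

# Hedged usage sketch; label names depend on the saved config and may appear as
# LABEL_0 / LABEL_1 / LABEL_2 rather than Negative / Neutral / Positive.
classifier = pipeline("text-classification", model="srimoyee12/my_awesome_model")
print(classifier("Revenue grew steadily and the auditors raised no concerns."))
```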
## Training and evaluation data
Default split from the [dataset card](https://huggingface.co/datasets/demo-org/auditor_review)
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1210, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.5919 | 0.4004 | 0.8359 | 0 |
| 0.2881 | 0.3590 | 0.8473 | 1 |
| 0.1735 | 0.3834 | 0.8524 | 2 |
### Framework versions
- Transformers 4.27.3
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
Apurva1205/Translation-Multi-Model-Belarusian-English
|
Apurva1205
| null |
[
"region:us"
] | 1,733,877,201,000 | 2024-12-11T00:39:38 | 0 | 0 |
---
metrics:
- accuracy
---
This repository hosts a comprehensive translation system integrating three different neural network architectures: RNN, LSTM, and GPT-2, designed for Belarusian-to-English translation tasks. The models demonstrate comparative performance and illustrate the strengths and limitations of each approach for sequence-to-sequence translation.

## Model Details

### 1. RNN (Recurrent Neural Network)
- Architecture: Basic RNN with embedding and dense layers.
- Training Data: Belarusian-to-English sentence pairs.
- Performance:
  - Train Loss: 1.9451 (Epoch 1) to 8.2370 (Epoch 10)
  - Validation Loss: 0.8365 (Epoch 1) to 0.1952 (Epoch 10)
  - Accuracy: 95.92%

### 2. LSTM (Long Short-Term Memory)
- Architecture: LSTM-based encoder-decoder with attention.
- Training Data: Same dataset as RNN.
- Performance:
  - Train Loss: 1.5918 (Epoch 1) to 0.0003 (Epoch 10)
  - Validation Loss: 1.2702 (Epoch 1) to 0.5693 (Epoch 10)
  - Accuracy: 95.92%

### 3. GPT-2 (Generative Pre-trained Transformer 2)
- Architecture: Fine-tuned GPT-2 model for translation tasks.
- Training Data: Belarusian-to-English formatted sentences.
- Performance:
  - Training Steps: 50 steps
  - Loss: 1.3228 (Step 1) to 0.8987 (Step 50)
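To make the GPT-2 setup concrete, here is a purely illustrative sketch of how Belarusian-to-English pairs are often formatted for GPT-2 fine-tuning; the actual delimiter and format used in this repository are not documented here, so treat this as an assumption:

```python
# Purely illustrative: one common way to format translation pairs for GPT-2
# fine-tuning. The real format used in this repository is an assumption.
def format_pair(src: str, tgt: str) -> str:
    return f"Belarusian: {src}\nEnglish: {tgt}"

print(format_pair("Добры дзень", "Good afternoon"))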
|
[
"TRANSLATION"
] |
Non_BioNLP
|
gsarti/mt5-small-news-summarization
|
gsarti
|
summarization
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"mt5",
"text2text-generation",
"italian",
"sequence-to-sequence",
"fanpage",
"ilpost",
"summarization",
"it",
"dataset:ARTeLab/fanpage",
"dataset:ARTeLab/ilpost",
"arxiv:2203.03759",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2022-03-09T07:52:27 | 147 | 0 |
---
datasets:
- ARTeLab/fanpage
- ARTeLab/ilpost
language:
- it
license: apache-2.0
metrics:
- rouge
tags:
- italian
- sequence-to-sequence
- fanpage
- ilpost
- summarization
widget:
- text: 'Non lo vuole sposare. E’ quanto emerge all’interno dell’ultima intervista
di Raffaella Fico che, ringraziando Mancini per i buoni consigli elargiti al suo
fidanzato, rimanda l’idea del matrimonio per qualche anno ancora. La soubrette,
che è stata recentemente protagonista di una dedica di Supermario, non ha ancora
intenzione di accasarsi perché è sicura che per mettersi la fede al dito ci sia
ancora tempo. Nonostante il suo Mario sia uno degli sportivi più desiderati al
mondo, l’ex protagonista del Grande Fratello non ha alcuna intenzione di cedere
seriamente alla sua corte. Solo qualche giorno fa, infatti, dopo l’ultima bravata
di Balotelli, Mancini gli aveva consigliato di sposare la sua Raffaella e di mettere
la testa a posto. Chi pensava che sarebbe stato Mario a rispondere, però, si è
sbagliato. A mettere le cose bene in chiaro è la Fico che, intervistata dall’emittente
radiofonica Rtl 102.5, dice: È presto per sposarsi, siamo ancora molto giovani.
È giusto che prima uno si realizzi nel proprio lavoro. E poi successivamente perché
no, ci si può anche pensare. Quando si è giovani capita di fare qualche pazzia,
quindi ci sta. Comunque i tabloid inglesi sono totalmente accaniti sulla sua vita
privata quando poi dovrebbero interessarsi di più di quello che fa sul campo.
Lui non fa le cose con cattiveria, ma quando si è giovani si fanno determinate
cose senza stare a pensare se sono giuste o sbagliate. Mario ha gli obiettivi
puntati addosso: più per la sua vita privata che come giocatore. Per me può anche
andare in uno strip club, se non fa niente di male, con gli amici, però devo dire
che alla fine torna sempre da me, sono la sua preferita.'
- text: 'Valerio è giovanissimo ma già una star. Fuori dall’Ariston ragazzine e meno
ragazzine passano ore anche sotto la pioggia per vederlo. Lui è forte del suo
talento e sicuro. Partecipa in gara tra i “big” di diritto, per essere arrivato
in finalissima nel programma Amici di Maria De Filippi e presenta il brano Per
tutte le volte che scritta per lui da Pierdavide Carone. Valerio Scanu è stato
eliminato. Ma non è detta l''ultima parola: il duetto di questa sera con Alessandra
Amoroso potrebbe risollevarlo e farlo rientrare in gara. Che cosa è successo alla
giuria visto che sei stato eliminato anche se l’esibizione era perfetta? Nn lo
so. Sono andate bene le esibizioni, ero emozionato ma tranquillo. Ero contento
ma ho cantato bene. Non sono passato e stasera ci sarà il ballottaggio… Quali
sono le differenze tra Amici e Sanremo? Sono due cose diverse. Amici ti prepara
a salire sul palco di amici. A Sanremo ci devi arrivare… ho fatto più di sessanta
serate nel tour estivo, poi promozione del secondo disco. Una bella palestra.
Sono cresciuto anche umanamente. Sono riuscito a percepire quello che il pubblico
trasmette. L’umiltà? Prima di tutto. Sennò non sarei qui.'
- text: L’azienda statunitense Broadcom, uno dei più grandi produttori di semiconduttori
al mondo, ha presentato un’offerta per acquisire Qualcomm, altra grande società
degli Stati Uniti conosciuta soprattutto per la sua produzione di microprocessori
Snapdragon (ARM), utilizzati in centinaia di milioni di smartphone in giro per
il mondo. Broadcom ha proposto di acquistare ogni azione di Qualcomm al prezzo
di 70 dollari, per un valore complessivo di circa 105 miliardi di dollari (130
miliardi se si comprendono 25 miliardi di debiti netti) . Se l’operazione dovesse
essere approvata, sarebbe una delle più grandi acquisizioni di sempre nella storia
del settore tecnologico degli Stati Uniti. Broadcom ha perfezionato per mesi la
sua proposta di acquisto e, secondo i media statunitensi, avrebbe già preso contatti
con Qualcomm per trovare un accordo. Secondo gli analisti, Qualcomm potrebbe comunque
opporsi all’acquisizione perché il prezzo offerto è di poco superiore a quello
dell’attuale valore delle azioni dell’azienda. Ci potrebbero essere inoltre complicazioni
sul piano dell’antitrust da valutare, prima di un’eventuale acquisizione.
- text: Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da
quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini
ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire
a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi
sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri
precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito,
contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli
teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore
e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti
di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay,
siano invece disponibili gratuitamente.
co2_eq_emissions:
emissions: 17g
source: Google Cloud Platform Carbon Footprint
training_type: fine-tuning
geographical_location: Eemshaven, Netherlands, Europe
hardware_used: 1 TPU v3-8 VM
thumbnail: https://gsarti.com/publication/it5/featured.png
model-index:
- name: mt5-small-news-summarization
results:
- task:
type: news-summarization
name: News Summarization
dataset:
name: NewsSum-IT
type: newssum-it
metrics:
- type: rouge1
value: 0.32
name: Test Rouge1 IlPost
- type: rouge2
value: 0.154
name: Test Rouge2 IlPost
- type: rougeL
value: 0.26
name: Test RougeL IlPost
- type: bertscore
value: 0.38
name: Test BERTScore IlPost
args:
- model_type: dbmdz/bert-base-italian-xxl-uncased
- lang: it
- num_layers: 10
- rescale_with_baseline: true
- baseline_path: bertscore_baseline_ita.tsv
- type: rouge1
value: 0.326
name: Test Rouge1 Fanpage
- type: rouge2
value: 0.145
name: Test Rouge2 Fanpage
- type: rougeL
value: 0.236
name: Test RougeL Fanpage
- type: bertscore
value: 0.37
name: Test BERTScore Fanpage
args:
- model_type: dbmdz/bert-base-italian-xxl-uncased
- lang: it
- num_layers: 10
- rescale_with_baseline: true
- baseline_path: bertscore_baseline_ita.tsv
---
# mT5 Small for News Summarization ✂️🗞️ 🇮🇹
This repository contains the checkpoint for the [mT5 Small](https://huggingface.co/google/mt5-small) model fine-tuned on news summarization on the [Fanpage](https://huggingface.co/datasets/ARTeLab/fanpage) and [Il Post](https://huggingface.co/datasets/ARTeLab/ilpost) corpora as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
newsum = pipeline("summarization", model='it5/mt5-small-news-summarization')
newsum("Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente.")
>>> [{"generated_text": "ITsART, la Netflix della cultura italiana, parte da maggio. Film, documentari, spettacoli teatrali e musicali disponibili sul nuovo sito a pagamento."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/mt5-small-news-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-small-news-summarization")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
aguinrodriguezj/finetuning-sentiment-model-3000-samples
|
aguinrodriguezj
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,700,568,056,000 | 2023-11-21T12:11:40 | 7 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- type: accuracy
value: 0.8633333333333333
name: Accuracy
- type: f1
value: 0.8655737704918034
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3392
- Accuracy: 0.8633
- F1: 0.8656
## Model description
More information needed
## Intended uses & limitations
More information needed
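A minimal, hedged usage sketch for this IMDB sentiment fine-tune; the label names depend on the saved config and may appear as `LABEL_0`/`LABEL_1`:

```python
from transformers import pipeline

# Hedged usage sketch; label names depend on the saved config and may appear
# as LABEL_0 / LABEL_1 rather than NEGATIVE / POSITIVE.
sentiment = pipeline(
    "text-classification",
    model="aguinrodriguezj/finetuning-sentiment-model-3000-samples",
)
print(sentiment("This movie was a complete waste of time."))
```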
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"TEXT_CLASSIFICATION"
] |
TBD
|
utter-project/EuroLLM-1.7B-Instruct
|
utter-project
|
text-generation
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"de",
"es",
"fr",
"it",
"pt",
"pl",
"nl",
"tr",
"sv",
"cs",
"el",
"hu",
"ro",
"fi",
"uk",
"sl",
"sk",
"da",
"lt",
"lv",
"et",
"bg",
"no",
"ca",
"hr",
"ga",
"mt",
"gl",
"zh",
"ru",
"ko",
"ja",
"ar",
"hi",
"arxiv:2409.16235",
"base_model:utter-project/EuroLLM-1.7B",
"base_model:finetune:utter-project/EuroLLM-1.7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,722,936,951,000 | 2024-12-16T12:46:04 | 12,433 | 70 |
---
base_model:
- utter-project/EuroLLM-1.7B
language:
- en
- de
- es
- fr
- it
- pt
- pl
- nl
- tr
- sv
- cs
- el
- hu
- ro
- fi
- uk
- sl
- sk
- da
- lt
- lv
- et
- bg
- 'no'
- ca
- hr
- ga
- mt
- gl
- zh
- ru
- ko
- ja
- ar
- hi
library_name: transformers
license: apache-2.0
---
## *Model updated on September 24*
# Model Card for EuroLLM-1.7B-Instruct
This is the model card for the first instruction tuned model of the EuroLLM series: EuroLLM-1.7B-Instruct. You can also check the pre-trained version: [EuroLLM-1.7B](https://huggingface.co/utter-project/EuroLLM-1.7B).
- **Developed by:** Unbabel, Instituto Superior Técnico, Instituto de Telecomunicações, University of Edinburgh, Aveni, University of Paris-Saclay, University of Amsterdam, Naver Labs, Sorbonne Université.
- **Funded by:** European Union.
- **Model type:** A 1.7B parameter instruction-tuned multilingual transformer LLM.
- **Language(s) (NLP):** Bulgarian, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish, Arabic, Catalan, Chinese, Galician, Hindi, Japanese, Korean, Norwegian, Russian, Turkish, and Ukrainian.
- **License:** Apache License 2.0.
## Model Details
The EuroLLM project has the goal of creating a suite of LLMs capable of understanding and generating text in all European Union languages as well as some additional relevant languages.
EuroLLM-1.7B is a 1.7B parameter model trained on 4 trillion tokens divided across the considered languages and several data sources: Web data, parallel data (en-xx and xx-en), and high-quality datasets.
EuroLLM-1.7B-Instruct was further instruction tuned on EuroBlocks, an instruction tuning dataset with focus on general instruction-following and machine translation.
### Model Description
EuroLLM uses a standard, dense Transformer architecture:
- We use grouped query attention (GQA) with 8 key-value heads, since it has been shown to increase speed at inference time while maintaining downstream performance.
- We perform pre-layer normalization, since it improves the training stability, and use the RMSNorm, which is faster.
- We use the SwiGLU activation function, since it has been shown to lead to good results on downstream tasks.
- We use rotary positional embeddings (RoPE) in every layer, since these have been shown to lead to good performances while allowing the extension of the context length.
For pre-training, we use 256 Nvidia H100 GPUs of the Marenostrum 5 supercomputer, training the model with a constant batch size of 3,072 sequences, which corresponds to approximately 12 million tokens, using the Adam optimizer, and BF16 precision.
Here is a summary of the model hyper-parameters:
| | |
|--------------------------------------|----------------------|
| Sequence Length | 4,096 |
| Number of Layers | 24 |
| Embedding Size | 2,048 |
| FFN Hidden Size | 5,632 |
| Number of Heads | 16 |
| Number of KV Heads (GQA) | 8 |
| Activation Function | SwiGLU |
| Position Encodings                   | RoPE (Θ = 10,000)    |
| Layer Norm | RMSNorm |
| Tied Embeddings | No |
| Embedding Parameters | 0.262B |
| LM Head Parameters | 0.262B |
| Non-embedding Parameters | 1.133B |
| Total Parameters | 1.657B |
## Run the model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "utter-project/EuroLLM-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

text = '<|im_start|>system\n<|im_end|>\n<|im_start|>user\nTranslate the following English source text to Portuguese:\nEnglish: I am a language model for european languages. \nPortuguese: <|im_end|>\n<|im_start|>assistant\n'

inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Results
### Machine Translation
We evaluate EuroLLM-1.7B-Instruct on several machine translation benchmarks: FLORES-200, WMT-23, and WMT-24 comparing it with [Gemma-2B](https://huggingface.co/google/gemma-2b) and [Gemma-7B](https://huggingface.co/google/gemma-7b) (also instruction tuned on EuroBlocks).
The results show that EuroLLM-1.7B is substantially better than Gemma-2B in Machine Translation and competitive with Gemma-7B.
#### Flores-200
| Model | AVG | AVG en-xx | AVG xx-en | en-ar | en-bg | en-ca | en-cs | en-da | en-de | en-el | en-es-latam | en-et | en-fi | en-fr | en-ga | en-gl | en-hi | en-hr | en-hu | en-it | en-ja | en-ko | en-lt | en-lv | en-mt | en-nl | en-no | en-pl | en-pt-br | en-ro | en-ru | en-sk | en-sl | en-sv | en-tr | en-uk | en-zh-cn | ar-en | bg-en | ca-en | cs-en | da-en | de-en | el-en | es-latam-en | et-en | fi-en | fr-en | ga-en | gl-en | hi-en | hr-en | hu-en | it-en | ja-en | ko-en | lt-en | lv-en | mt-en | nl-en | no-en | pl-en | pt-br-en | ro-en | ru-en | sk-en | sl-en | sv-en | tr-en | uk-en | zh-cn-en |
|--------------------------------|------|-----------|-----------|-------|-------|-------|-------|-------|-------|-------|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|----------|-------|-------|-------|-------|-------|-------|-------|----------|-------|-------|-------|-------|-------|-------|-------|--------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|----------|-------|-------|-------|-------|-------|-------|-------|----------|
| EuroLLM-1.7B-Instruct |86.89 | 86.53 | 87.25 | 85.17 | 89.42 | 84.72 | 89.13 | 89.47 | 86.90 | 87.60 | 86.29 | 88.95 | 89.40 | 87.69 | 74.89 | 86.41 | 76.92 | 84.79 | 86.78 | 88.17 | 89.76 | 87.70 | 87.27 | 87.62 | 67.84 | 87.10 | 90.00 | 88.18 | 89.29 | 89.49 | 88.32 | 88.18 | 86.85 | 90.00 | 87.31 | 87.89 | 86.60 | 86.34 | 87.45 | 87.57 | 87.95 | 89.72 | 88.80 | 87.00 | 86.77 | 88.34 | 89.09 | 88.95 | 82.69 | 87.80 | 88.37 | 86.71 | 87.20 | 87.81 | 86.79 | 86.79 | 85.62 | 86.48 | 81.10 | 86.97 | 90.25 | 85.75 | 89.20 | 88.88 | 86.00 | 87.38 | 86.76 | 89.61 | 87.94 |
| Gemma-2B-EuroBlocks | 81.59 | 78.97 | 84.21 | 76.68 | 82.73 | 83.14 | 81.63 | 84.63 | 83.15 | 79.42 | 84.05 | 72.58 | 79.73 | 84.97 | 40.50 | 82.13 | 67.79 | 80.53 | 78.36 | 84.90 | 87.43 | 82.98 | 72.29 | 68.68 | 58.55 | 83.13 | 86.15 | 82.78 | 86.79 | 83.14 | 84.61 | 78.18 | 75.37 | 80.89 | 78.38 | 84.38 | 84.35 | 83.88 | 85.77 | 86.85 | 86.31 | 88.24 | 88.12 | 84.79 | 84.90 | 82.51 | 86.32 | 88.29 | 54.78 | 86.53 | 85.83 | 85.41 | 85.18 | 86.77 | 85.78 | 84.99 | 81.65 | 81.78 | 67.27 | 85.92 | 89.07 | 84.14 | 88.07 | 87.17 | 85.23 | 85.09 | 83.95 | 87.57 | 84.77 |
| Gemma-7B-EuroBlocks |85.27 | 83.90 | 86.64 | 86.38 | 87.87 | 85.74 | 84.25 | 85.69 | 81.49 | 85.52 | 86.93 | 62.83 | 84.96 | 75.34 | 84.93 | 83.91 | 86.92 | 88.19 | 86.11 | 81.73 | 80.55 | 66.85 | 85.31 | 89.36 | 85.87 | 88.62 | 88.06 | 86.67 | 84.79 | 82.71 | 86.45 | 85.19 | 86.67 | 85.77 | 86.36 | 87.21 | 88.09 | 87.17 | 89.40 | 88.26 | 86.74 | 86.73 | 87.25 | 88.87 | 88.81 | 72.45 | 87.62 | 87.86 | 87.08 | 87.01 | 87.58 | 86.92 | 86.70 | 85.10 | 85.74 | 77.81 | 86.83 | 90.40 | 85.41 | 89.04 | 88.77 | 86.13 | 86.67 | 86.32 | 89.27 | 87.92 |
#### WMT-23
| Model | AVG | AVG en-xx | AVG xx-en | AVG xx-xx | en-de | en-cs | en-uk | en-ru | en-zh-cn | de-en | uk-en | ru-en | zh-cn-en | cs-uk |
|--------------------------------|------|-----------|-----------|-----------|-------|-------|-------|-------|----------|-------|-------|-------|----------|-------|
| EuroLLM-1.7B-Instruct | 82.91 | 83.20 | 81.77 | 86.82 | 81.56 | 85.23 | 81.30 | 82.47 | 83.61 | 85.03 | 84.06 | 85.25 | 81.31 | 78.83 | 79.42 | 86.82 |
| Gemma-2B-EuroBlocks | 79.96 | 79.01 | 80.86 | 81.15 | 76.82 | 76.05 | 77.92 | 78.98 | 81.58 | 82.73 | 82.71 | 83.99 | 80.35 | 78.27 | 78.99 | 81.15 |
| Gemma-7B-EuroBlocks | 82.76 | 82.26 | 82.70 | 85.98 | 81.37 | 82.42 | 81.54 | 82.18 | 82.90 | 83.17 | 84.29 | 85.70 | 82.46 | 79.73 | 81.33 | 85.98 |
#### WMT-24
| Model | AVG | AVG en-xx | AVG xx-xx | en-de | en-es-latam | en-cs | en-ru | en-uk | en-ja | en-zh-cn | en-hi | cs-uk | ja-zh-cn |
|---------|------|------|-------|----|---|-------|-------|--------|--------|-------|-------|-------|-----|
| EuroLLM-1.7B-Instruct|79.32 | 79.32 | 79.34 | 79.42 | 80.67 | 80.55 | 78.65 | 80.12 | 82.96 | 80.60 | 71.59 | 83.48 | 75.20 |
|Gemma-2B-EuroBlocks| 74.72 | 74.41 | 75.97 | 74.93 | 78.81 | 70.54 | 74.90 | 75.84 | 79.48 | 78.06 | 62.70 | 79.87 | 72.07 |
|Gemma-7B-EuroBlocks| 78.67 | 78.34 | 80.00 | 78.88 | 80.47 | 78.55 | 78.55 | 80.12 | 80.55 | 78.90 | 70.71 | 84.33 | 75.66 |
### General Benchmarks
We also compare EuroLLM-1.7B with [TinyLlama-v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) and [Gemma-2B](https://huggingface.co/google/gemma-2b) on two general benchmarks: Arc Challenge and Hellaswag.
For the non-english languages we use the [Okapi](https://aclanthology.org/2023.emnlp-demo.28.pdf) datasets.
Results show that EuroLLM-1.7B is superior to TinyLlama-v1.1 and similar to Gemma-2B on Hellaswag but worse on Arc Challenge. This may be due to EuroLLM-1.7B's lower number of non-embedding parameters (1.133B against 1.981B).
#### Arc Challenge
| Model | Average | English | German | Spanish | French | Italian | Portuguese | Chinese | Russian | Dutch | Arabic | Swedish | Hindi | Hungarian | Romanian | Ukrainian | Danish | Catalan |
|--------------------|---------|---------|--------|---------|--------|---------|------------|---------|---------|-------|--------|---------|--------|-----------|----------|-----------|--------|---------|
| EuroLLM-1.7B | 0.3496 | 0.4061 | 0.3464 | 0.3684 | 0.3627 | 0.3738 | 0.3855 | 0.3521 | 0.3208 | 0.3507 | 0.3045 | 0.3605 | 0.2928 | 0.3271 | 0.3488 | 0.3516 | 0.3513 | 0.3396 |
| TinyLlama-v1.1 | 0.2650 | 0.3712 | 0.2524 | 0.2795 | 0.2883 | 0.2652 | 0.2906 | 0.2410 | 0.2669 | 0.2404 | 0.2310 | 0.2687 | 0.2354 | 0.2449 | 0.2476 | 0.2524 | 0.2494 | 0.2796 |
| Gemma-2B | 0.3617 | 0.4846 | 0.3755 | 0.3940 | 0.4080 | 0.3687 | 0.3872 | 0.3726 | 0.3456 | 0.3328 | 0.3122 | 0.3519 | 0.2851 | 0.3039 | 0.3590 | 0.3601 | 0.3565 | 0.3516 |
#### Hellaswag
| Model | Average | English | German | Spanish | French | Italian | Portuguese | Russian | Dutch | Arabic | Swedish | Hindi | Hungarian | Romanian | Ukrainian | Danish | Catalan |
|--------------------|---------|---------|--------|---------|--------|---------|------------|---------|--------|--------|---------|--------|-----------|----------|-----------|--------|---------|
| EuroLLM-1.7B | 0.4744 | 0.4760 | 0.6057 | 0.4793 | 0.5337 | 0.5298 | 0.5085 | 0.5224 | 0.4654 | 0.4949 | 0.4104 | 0.4800 | 0.3655 | 0.4097 | 0.4606 | 0.436 | 0.4702 | 0.4445 |
| TinyLlama-v1.1 |0.3674 | 0.6248 | 0.3650 | 0.4137 | 0.4010 | 0.3780 | 0.3892 | 0.3494 | 0.3588 | 0.2880 | 0.3561 | 0.2841 | 0.3073 | 0.3267 | 0.3349 | 0.3408 | 0.3613 |
| Gemma-2B |0.4666 | 0.7165 | 0.4756 | 0.5414 | 0.5180 | 0.4841 | 0.5081 | 0.4664 | 0.4655 | 0.3868 | 0.4383 | 0.3413 | 0.3710 | 0.4316 | 0.4291 | 0.4471 | 0.4448 |
## Bias, Risks, and Limitations
EuroLLM-1.7B-Instruct has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
## Paper
Paper: [EuroLLM: Multilingual Language Models for Europe](https://huggingface.co/papers/2409.16235)
|
[
"TRANSLATION"
] |
Non_BioNLP
|
poltextlab/xlm-roberta-large-english-party-cap-v3
|
poltextlab
|
text-classification
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"multilingual",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,699,012,305,000 | 2025-02-26T16:06:19 | 0 | 0 |
---
language:
- multilingual
license: mit
metrics:
- accuracy
- f1-score
tags:
- zero-shot-classification
- text-classification
- pytorch
extra_gated_prompt: 'Our models are intended for academic use only. If you are not
affiliated with an academic institution, please provide a rationale for using our
models. Please allow us a few business days to manually review subscriptions.
If you use our models for your work or research, please cite this paper: Sebők,
M., Máté, Á., Ring, O., Kovács, V., & Lehoczki, R. (2024). Leveraging Open Large
Language Models for Multilingual Policy Topic Classification: The Babel Machine
Approach. Social Science Computer Review, 0(0). https://doi.org/10.1177/08944393241259434'
extra_gated_fields:
Name: text
Country: country
Institution: text
Institution Email: text
Please specify your academic use case: text
---
# xlm-roberta-large-english-party-cap-v3
## Model description
An `xlm-roberta-large` model finetuned on multilingual training data containing texts of the `party` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
We follow the master codebook of the Comparative Agendas Project, and all of our models use the same major topic codes.
## How to use the model
```python
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
pipe = pipeline(
model="poltextlab/xlm-roberta-large-english-party-cap-v3",
task="text-classification",
tokenizer=tokenizer,
use_fast=False,
token="<your_hf_read_only_token>"
)
text = "We will place an immediate 6-month halt on the finance driven closure of beds and wards, and set up an independent audit of needs and facilities."
pipe(text)
```
The translation table from the model results to CAP codes is the following:
```python
CAP_NUM_DICT = {
0: 1,
1: 2,
2: 3,
3: 4,
4: 5,
5: 6,
6: 7,
7: 8,
8: 9,
9: 10,
10: 12,
11: 13,
12: 14,
13: 15,
14: 16,
15: 17,
16: 18,
17: 19,
18: 20,
19: 21,
20: 23,
21: 999,
}
```
We have included a 999 label because our models are fine-tuned on training data containing the label 'None' in addition to the 21 CAP major policy topic codes, indicating that the given text contains no relevant policy content. We use the label 999 for these cases.
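Putting the two snippets above together, here is a hedged sketch of mapping a pipeline prediction back to a CAP major topic code; it assumes the pipeline returns labels of the form `LABEL_<index>`, which may differ depending on the saved config:

```python
# Hedged sketch: convert a pipeline prediction into a CAP major topic code.
# Assumes the returned label looks like "LABEL_3"; adjust the parsing if your
# config already carries human-readable labels.
prediction = pipe(text)[0]            # e.g. {"label": "LABEL_3", "score": 0.91}
label_index = int(prediction["label"].split("_")[-1])
cap_code = CAP_NUM_DICT[label_index]
print(prediction["label"], "->", "CAP major topic", cap_code)
```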
### Gated access
Due to the gated access, you must pass the `token` parameter when loading the model. In earlier versions of the Transformers package, you may need to use the `use_auth_token` parameter instead.
## Model performance
The model was evaluated on a test set of 13879 examples (10% of the available data).<br>
Model accuracy is **0.73**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.69 | 0.73 | 0.71 | 1142 |
| 1 | 0.68 | 0.7 | 0.69 | 705 |
| 2 | 0.79 | 0.85 | 0.82 | 865 |
| 3 | 0.79 | 0.77 | 0.78 | 362 |
| 4 | 0.72 | 0.64 | 0.68 | 628 |
| 5 | 0.87 | 0.83 | 0.85 | 936 |
| 6 | 0.68 | 0.71 | 0.7 | 430 |
| 7 | 0.88 | 0.8 | 0.84 | 360 |
| 8 | 0.72 | 0.75 | 0.74 | 198 |
| 9 | 0.85 | 0.79 | 0.82 | 327 |
| 10 | 0.8 | 0.75 | 0.77 | 903 |
| 11 | 0.61 | 0.68 | 0.64 | 752 |
| 12 | 0.66 | 0.79 | 0.72 | 531 |
| 13 | 0.65 | 0.61 | 0.63 | 406 |
| 14 | 0.83 | 0.75 | 0.79 | 964 |
| 15 | 0.71 | 0.74 | 0.73 | 234 |
| 16 | 0.71 | 0.67 | 0.69 | 253 |
| 17 | 0.77 | 0.83 | 0.8 | 1637 |
| 18 | 0.71 | 0.59 | 0.65 | 910 |
| 19 | 0.73 | 0.74 | 0.73 | 366 |
| 20 | 0.76 | 0.61 | 0.68 | 77 |
| 21 | 0.59 | 0.6 | 0.59 | 893 |
| macro avg | 0.74 | 0.72 | 0.73 | 13879 |
| weighted avg | 0.74 | 0.73 | 0.73 | 13879 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
|
[
"TRANSLATION"
] |
Non_BioNLP
|
jzhong22/marian-finetuned-kde4-en-to-fr
|
jzhong22
|
translation
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,733,277,397,000 | 2024-12-04T04:14:18 | 5 | 0 |
---
base_model: Helsinki-NLP/opus-mt-en-fr
datasets:
- kde4
library_name: transformers
license: apache-2.0
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
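A minimal, hedged usage sketch (assuming the standard `transformers` translation pipeline; not part of the original card):

```python
from transformers import pipeline

# Hedged usage sketch for the fine-tuned English-to-French checkpoint.
translator = pipeline("translation", model="jzhong22/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads"))
# -> [{'translation_text': '...'}]  (actual French output will vary)
```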
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"TRANSLATION"
] |
Non_BioNLP
|
rishabhjain16/whisper-medium
|
rishabhjain16
|
automatic-speech-recognition
|
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"arxiv:2212.04356",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | 1,707,754,020,000 | 2024-02-12T16:07:01 | 12 | 0 |
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
license: apache-2.0
pipeline_tag: automatic-speech-recognition
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: whisper-medium
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- type: wer
value: 2.9
name: Test WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- type: wer
value: 5.9
name: Test WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: hi
split: test
args:
language: hi
metrics:
- type: wer
value: 53.87
name: Test WER
---
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens
are a sequence of tokens that are given to the decoder at the start of the decoding process, and take the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction
Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
Which tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.
These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at
each position. This allows one to control the output language and task for the Whisper model. If they are un-forced,
the Whisper model will automatically predict the output language and task itself.
The context tokens can be set accordingly:
```python
model.config.forced_decoder_ids = WhisperProcessor.get_decoder_prompt_ids(language="english", task="transcribe")
```
Which forces the model to predict in English under the task of speech recognition.
## Transcription
### English to English
In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language
(English) and task (transcribe).
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
>>> model.config.forced_decoder_ids = None
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
### French to French
The following example demonstrates French to French transcription by setting the decoder ids appropriately.
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids)
['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Un vrai travail intéressant va enfin être mené sur ce sujet.']
```
## Translation
Setting the task to "translate" forces the Whisper model to perform speech translation.
### French to English
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' A very interesting work, we will finally be given on this subject.']
```
## Evaluation
This code snippet shows how to evaluate Whisper Medium on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium").to("cuda")
>>> def map_to_pred(batch):
>>> audio = batch["audio"]
>>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>> with torch.no_grad():
>>> predicted_ids = model.generate(input_features.to("cuda"))[0]
>>> transcription = processor.decode(predicted_ids)
>>> batch["prediction"] = processor.tokenizer._normalize(transcription)
>>> return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
2.900409225488902
```
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of up to arbitrary length. This is possible through Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="openai/whisper-medium",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech, use of the model for classification is not only not evaluated but also not appropriate, particularly to infer human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, over many existing ASR systems, the models exhibit improved robustness to accents, background noise, technical language, as well as zero shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box – their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
|
[
"TRANSLATION"
] |
Non_BioNLP
|
gpustack/bce-embedding-base_v1-GGUF
|
gpustack
|
feature-extraction
|
[
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"transformers",
"en",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,730,389,074,000 | 2024-11-01T03:02:40 | 645 | 0 |
---
language:
- en
- zh
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# bce-embedding-base_v1-GGUF
**Model creator**: [maidalun1020](https://huggingface.co/maidalun1020)<br/>
**Original model**: [maidalun1020/bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1)<br/>
**GGUF quantization**: based on llama.cpp release [61408e7f](https://github.com/ggerganov/llama.cpp/commit/61408e7fad082dc44a11c8a9f1398da4837aad44)
---
<!--
* @Description:
* @Author: shenlei
* @Date: 2023-12-19 10:31:41
* @LastEditTime: 2024-01-09 23:52:00
* @LastEditors: shenlei
-->
<h1 align="center">BCEmbedding: Bilingual and Crosslingual Embedding for RAG</h1>
<p align="center">
<a href="https://github.com/netease-youdao/BCEmbedding/blob/master/LICENSE">
<img src="https://img.shields.io/badge/license-Apache--2.0-yellow">
</a>
<a href="https://twitter.com/YDopensource">
<img src="https://img.shields.io/badge/follow-%40YDOpenSource-1DA1F2?logo=twitter&style={style}">
</a>
</p>
最新、最详细的bce-embedding-base_v1相关信息,请移步(The latest "Updates" should be checked in):
<p align="left">
<a href="https://github.com/netease-youdao/BCEmbedding">GitHub</a>
</p>
## 主要特点(Key Features):
- 中英双语,以及中英跨语种能力(Bilingual and Crosslingual capability in English and Chinese);
- RAG优化,适配更多真实业务场景(RAG adaptation for more domains, including Education, Law, Finance, Medical, Literature, FAQ, Textbook, Wikipedia, etc.);
- 方便集成进langchain和llamaindex(Easy integrations for langchain and llamaindex in <a href="https://github.com/netease-youdao/BCEmbedding">BCEmbedding</a>)。
- `EmbeddingModel`不需要“精心设计”instruction,尽可能召回有用片段。 (No need for "instruction")
- **最佳实践(Best practice)** :embedding召回top50-100片段,reranker对这50-100片段精排,最后取top5-10片段。(1. Get top 50-100 passages with [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1) for "`recall`"; 2. Rerank passages with [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1) and get top 5-10 for "`precision`" finally. )
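As a hedged illustration of the "recall" step in that best practice, the sketch below uses the original `sentence-transformers` checkpoint (`maidalun1020/bce-embedding-base_v1`) rather than the GGUF files; the query and passages are made up for demonstration:

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Hedged sketch of the "recall" step: embed a query and candidate passages,
# then rank passages by cosine similarity before handing the top ones to a reranker.
model = SentenceTransformer("maidalun1020/bce-embedding-base_v1")
query = "What is BCEmbedding used for?"
passages = [
    "BCEmbedding provides bilingual embedding and reranker models for RAG.",
    "The weather in Hangzhou is mild in spring.",
]
q_emb = model.encode([query], normalize_embeddings=True)
p_emb = model.encode(passages, normalize_embeddings=True)
scores = (q_emb @ p_emb.T)[0]
for i in np.argsort(-scores):
    print(round(float(scores[i]), 3), passages[i])
```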
## News:
- `BCEmbedding`技术博客( **Technical Blog** ): [为RAG而生-BCEmbedding技术报告](https://zhuanlan.zhihu.com/p/681370855)
- Related link for **RerankerModel** : [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1)
## Third-party Examples:
- RAG applications: [QAnything](https://github.com/netease-youdao/qanything), [HuixiangDou](https://github.com/InternLM/HuixiangDou), [ChatPDF](https://github.com/shibing624/ChatPDF).
- Efficient inference framework: [ChatLLM.cpp](https://github.com/foldl/chatllm.cpp), [Xinference](https://github.com/xorbitsai/inference), [mindnlp (Huawei GPU, 华为GPU)](https://github.com/mindspore-lab/mindnlp/tree/master/llm/inference/bce).


-----------------------------------------
<details open="open">
<summary>Click to Open Contents</summary>
- <a href="#-bilingual-and-crosslingual-superiority" target="_Self">🌐 Bilingual and Crosslingual Superiority</a>
- <a href="#-key-features" target="_Self">💡 Key Features</a>
- <a href="#-latest-updates" target="_Self">🚀 Latest Updates</a>
- <a href="#-model-list" target="_Self">🍎 Model List</a>
- <a href="#-manual" target="_Self">📖 Manual</a>
- <a href="#installation" target="_Self">Installation</a>
- <a href="#quick-start" target="_Self">Quick Start (`transformers`, `sentence-transformers`)</a>
- <a href="#integrations-for-rag-frameworks" target="_Self">Integrations for RAG Frameworks (`langchain`, `llama_index`)</a>
- <a href="#%EF%B8%8F-evaluation" target="_Self">⚙️ Evaluation</a>
- <a href="#evaluate-semantic-representation-by-mteb" target="_Self">Evaluate Semantic Representation by MTEB</a>
- <a href="#evaluate-rag-by-llamaindex" target="_Self">Evaluate RAG by LlamaIndex</a>
- <a href="#-leaderboard" target="_Self">📈 Leaderboard</a>
- <a href="#semantic-representation-evaluations-in-mteb" target="_Self">Semantic Representation Evaluations in MTEB</a>
- <a href="#rag-evaluations-in-llamaindex" target="_Self">RAG Evaluations in LlamaIndex</a>
- <a href="#-youdaos-bcembedding-api" target="_Self">🛠 Youdao's BCEmbedding API</a>
- <a href="#-wechat-group" target="_Self">🧲 WeChat Group</a>
- <a href="#%EF%B8%8F-citation" target="_Self">✏️ Citation</a>
- <a href="#-license" target="_Self">🔐 License</a>
- <a href="#-related-links" target="_Self">🔗 Related Links</a>
</details>
<br>
**B**ilingual and **C**rosslingual **Embedding** (`BCEmbedding`), developed by NetEase Youdao, encompasses `EmbeddingModel` and `RerankerModel`. The `EmbeddingModel` specializes in generating semantic vectors, playing a crucial role in semantic search and question-answering, and the `RerankerModel` excels at refining search results and ranking tasks.
`BCEmbedding` serves as the cornerstone of Youdao's Retrieval Augmented Generation (RAG) implementation, notably [QAnything](http://qanything.ai) [[github](https://github.com/netease-youdao/qanything)], an open-source project widely integrated into various Youdao products such as [Youdao Speed Reading](https://read.youdao.com/#/home) and [Youdao Translation](https://fanyi.youdao.com/download-Mac?keyfrom=fanyiweb_navigation).
Distinguished by its bilingual and crosslingual proficiency, `BCEmbedding` excels at bridging the linguistic gap between Chinese and English, achieving
- **High performance on <a href="#semantic-representation-evaluations-in-mteb">Semantic Representation Evaluations in MTEB</a>**;
- **A new benchmark in the realm of <a href="#rag-evaluations-in-llamaindex">RAG Evaluations in LlamaIndex</a>**.
`BCEmbedding`是由网易有道开发的双语和跨语种语义表征算法模型库,其中包含`EmbeddingModel`和`RerankerModel`两类基础模型。`EmbeddingModel`专门用于生成语义向量,在语义搜索和问答中起着关键作用,而`RerankerModel`擅长优化语义搜索结果和语义相关顺序精排。
`BCEmbedding`作为有道的检索增强生成式应用(RAG)的基石,特别是在[QAnything](http://qanything.ai) [[github](https://github.com/netease-youdao/qanything)]中发挥着重要作用。QAnything作为一个网易有道开源项目,在有道许多产品中有很好的应用实践,比如[有道速读](https://read.youdao.com/#/home)和[有道翻译](https://fanyi.youdao.com/download-Mac?keyfrom=fanyiweb_navigation)
`BCEmbedding`以其出色的双语和跨语种能力而著称,在语义检索中消除中英语言之间的差异,从而实现:
- **强大的双语和跨语种语义表征能力【<a href="#semantic-representation-evaluations-in-mteb">基于MTEB的语义表征评测指标</a>】。**
- **基于LlamaIndex的RAG评测,表现SOTA【<a href="#rag-evaluations-in-llamaindex">基于LlamaIndex的RAG评测指标</a>】。**
## 🌐 Bilingual and Crosslingual Superiority
Existing embedding models often encounter performance challenges in bilingual and crosslingual scenarios, particularly in Chinese, English and their crosslingual tasks. `BCEmbedding`, leveraging the strength of Youdao's translation engine, excels in delivering superior performance across monolingual, bilingual, and crosslingual settings.
`EmbeddingModel` supports ***Chinese (ch) and English (en)*** (more languages support will come soon), while `RerankerModel` supports ***Chinese (ch), English (en), Japanese (ja) and Korean (ko)***.
现有的单个语义表征模型在双语和跨语种场景中常常表现不佳,特别是在中文、英文及其跨语种任务中。`BCEmbedding`充分利用有道翻译引擎的优势,实现只需一个模型就可以在单语、双语和跨语种场景中表现出卓越的性能。
`EmbeddingModel`支持***中文和英文***(之后会支持更多语种);`RerankerModel`支持***中文,英文,日文和韩文***。
## 💡 Key Features
- **Bilingual and Crosslingual Proficiency**: Powered by Youdao's translation engine, excelling in Chinese, English and their crosslingual retrieval task, with upcoming support for additional languages.
- **RAG-Optimized**: Tailored for diverse RAG tasks including **translation, summarization, and question answering**, ensuring accurate **query understanding**. See <a href=#rag-evaluations-in-llamaindex>RAG Evaluations in LlamaIndex</a>.
- **Efficient and Precise Retrieval**: Dual-encoder for efficient retrieval of `EmbeddingModel` in first stage, and cross-encoder of `RerankerModel` for enhanced precision and deeper semantic analysis in second stage.
- **Broad Domain Adaptability**: Trained on diverse datasets for superior performance across various fields.
- **User-Friendly Design**: Instruction-free, versatile use for multiple tasks without specifying query instruction for each task.
- **Meaningful Reranking Scores**: `RerankerModel` provides relevant scores to improve result quality and optimize large language model performance.
- **Proven in Production**: Successfully implemented and validated in Youdao's products.
- **双语和跨语种能力**:基于有道翻译引擎的强大能力,我们的`BCEmbedding`具备强大的中英双语和跨语种语义表征能力。
- **RAG适配**:面向RAG做了针对性优化,可以适配大多数相关任务,比如**翻译,摘要,问答**等。此外,针对**问题理解**(query understanding)也做了针对优化,详见 <a href="#rag-evaluations-in-llamaindex">基于LlamaIndex的RAG评测指标</a>。
- **高效且精确的语义检索**:`EmbeddingModel`采用双编码器,可以在第一阶段实现高效的语义检索。`RerankerModel`采用交叉编码器,可以在第二阶段实现更高精度的语义顺序精排。
- **更好的领域泛化性**:为了在更多场景实现更好的效果,我们收集了多种多样的领域数据。
- **用户友好**:语义检索时不需要特殊指令前缀。也就是,你不需要为各种任务绞尽脑汁设计指令前缀。
- **有意义的重排序分数**:`RerankerModel`可以提供有意义的语义相关性分数(不仅仅是排序),可以用于过滤无意义文本片段,提高大模型生成效果。
- **产品化检验**:`BCEmbedding`已经被有道众多真实产品检验。
## 🚀 Latest Updates
- ***2024-01-03***: **Model Releases** - [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1) and [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1) are available.
- ***2024-01-03***: **Eval Datasets** [[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)] - Evaluate the performance of RAG, using [LlamaIndex](https://github.com/run-llama/llama_index).
- ***2024-01-03***: **Eval Datasets** [[Details](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)] - Evaluate the performance of crosslingual semantic representation, using [MTEB](https://github.com/embeddings-benchmark/mteb).
- ***2024-01-03***: **模型发布** - [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1)和[bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1)已发布.
- ***2024-01-03***: **RAG评测数据** [[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)] - 基于[LlamaIndex](https://github.com/run-llama/llama_index)的RAG评测数据已发布。
- ***2024-01-03***: **跨语种语义表征评测数据** [[详情](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)] - 基于[MTEB](https://github.com/embeddings-benchmark/mteb)的跨语种评测数据已发布.
## 🍎 Model List
| Model Name | Model Type | Languages | Parameters | Weights |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|
| bce-embedding-base_v1 | `EmbeddingModel` | ch, en | 279M | [download](https://huggingface.co/maidalun1020/bce-embedding-base_v1) |
| bce-reranker-base_v1 | `RerankerModel` | ch, en, ja, ko | 279M | [download](https://huggingface.co/maidalun1020/bce-reranker-base_v1) |
## 📖 Manual
### Installation
First, create a conda environment and activate it.
```bash
conda create --name bce python=3.10 -y
conda activate bce
```
Then install `BCEmbedding` for minimal installation:
```bash
pip install BCEmbedding==0.1.1
```
Or install from source:
```bash
git clone [email protected]:netease-youdao/BCEmbedding.git
cd BCEmbedding
pip install -v -e .
```
### Quick Start
#### 1. Based on `BCEmbedding`
Use `EmbeddingModel`; the `cls` [pooler](./BCEmbedding/models/embedding.py#L24) is the default.
```python
from BCEmbedding import EmbeddingModel
# list of sentences
sentences = ['sentence_0', 'sentence_1', ...]
# init embedding model
model = EmbeddingModel(model_name_or_path="maidalun1020/bce-embedding-base_v1")
# extract embeddings
embeddings = model.encode(sentences)
```
Use `RerankerModel` to calculate relevance scores and rerank:
```python
from BCEmbedding import RerankerModel
# your query and corresponding passages
query = 'input_query'
passages = ['passage_0', 'passage_1', ...]
# construct sentence pairs
sentence_pairs = [[query, passage] for passage in passages]
# init reranker model
model = RerankerModel(model_name_or_path="maidalun1020/bce-reranker-base_v1")
# method 0: calculate scores of sentence pairs
scores = model.compute_score(sentence_pairs)
# method 1: rerank passages
rerank_results = model.rerank(query, passages)
```
NOTE:
- The [`RerankerModel.rerank`](./BCEmbedding/models/reranker.py#L137) method provides the advanced preprocessing we use in production to construct `sentence_pairs` when the "passages" are very long.
#### 2. Based on `transformers`
For `EmbeddingModel`:
```python
from transformers import AutoModel, AutoTokenizer
# list of sentences
sentences = ['sentence_0', 'sentence_1', ...]
# init model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('maidalun1020/bce-embedding-base_v1')
model = AutoModel.from_pretrained('maidalun1020/bce-embedding-base_v1')
device = 'cuda' # if no GPU, set "cpu"
model.to(device)
# get inputs
inputs = tokenizer(sentences, padding=True, truncation=True, max_length=512, return_tensors="pt")
inputs_on_device = {k: v.to(device) for k, v in inputs.items()}
# get embeddings
outputs = model(**inputs_on_device, return_dict=True)
embeddings = outputs.last_hidden_state[:, 0] # cls pooler
embeddings = embeddings / embeddings.norm(dim=1, keepdim=True) # normalize
```
For `RerankerModel`:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# your query and corresponding passages
query = 'input_query'
passages = ['passage_0', 'passage_1', ...]
# construct sentence pairs to score
sentence_pairs = [[query, passage] for passage in passages]
# init model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('maidalun1020/bce-reranker-base_v1')
model = AutoModelForSequenceClassification.from_pretrained('maidalun1020/bce-reranker-base_v1')
device = 'cuda' # if no GPU, set "cpu"
model.to(device)
# get inputs
inputs = tokenizer(sentence_pairs, padding=True, truncation=True, max_length=512, return_tensors="pt")
inputs_on_device = {k: v.to(device) for k, v in inputs.items()}
# calculate scores
scores = model(**inputs_on_device, return_dict=True).logits.view(-1,).float()
scores = torch.sigmoid(scores)
```
#### 3. Based on `sentence_transformers`
For `EmbeddingModel`:
```python
from sentence_transformers import SentenceTransformer
# list of sentences
sentences = ['sentence_0', 'sentence_1', ...]
# init embedding model
## Updated for the new sentence-transformers release. Clean up "`SENTENCE_TRANSFORMERS_HOME`/maidalun1020_bce-embedding-base_v1" or "~/.cache/torch/sentence_transformers/maidalun1020_bce-embedding-base_v1" first so that the new version is downloaded.
model = SentenceTransformer("maidalun1020/bce-embedding-base_v1")
# extract embeddings
embeddings = model.encode(sentences, normalize_embeddings=True)
```
For `RerankerModel`:
```python
from sentence_transformers import CrossEncoder
# your query and corresponding passages
query = 'input_query'
passages = ['passage_0', 'passage_1', ...]
sentence_pairs = [[query, passage] for passage in passages]
# init reranker model
model = CrossEncoder('maidalun1020/bce-reranker-base_v1', max_length=512)
# calculate scores of sentence pairs
scores = model.predict(sentence_pairs)
```
### Integrations for RAG Frameworks
#### 1. Used in `langchain`
```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.vectorstores.utils import DistanceStrategy
query = 'apples'
passages = [
'I like apples',
'I like oranges',
'Apples and oranges are fruits'
]
# init embedding model
model_name = 'maidalun1020/bce-embedding-base_v1'
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'batch_size': 64, 'normalize_embeddings': True, 'show_progress_bar': False}
embed_model = HuggingFaceEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
# example #1. extract embeddings
query_embedding = embed_model.embed_query(query)
passages_embeddings = embed_model.embed_documents(passages)
# example #2. langchain retriever example
faiss_vectorstore = FAISS.from_texts(passages, embed_model, distance_strategy=DistanceStrategy.MAX_INNER_PRODUCT)
retriever = faiss_vectorstore.as_retriever(search_type="similarity", search_kwargs={"score_threshold": 0.5, "k": 3})
related_passages = retriever.get_relevant_documents(query)
```
#### 2. Used in `llama_index`
```python
import os
from llama_index.embeddings import HuggingFaceEmbedding
from llama_index import VectorStoreIndex, ServiceContext, SimpleDirectoryReader
from llama_index.node_parser import SimpleNodeParser
from llama_index.llms import OpenAI
query = 'apples'
passages = [
'I like apples',
'I like oranges',
'Apples and oranges are fruits'
]
# init embedding model
model_args = {'model_name': 'maidalun1020/bce-embedding-base_v1', 'max_length': 512, 'embed_batch_size': 64, 'device': 'cuda'}
embed_model = HuggingFaceEmbedding(**model_args)
# example #1. extract embeddings
query_embedding = embed_model.get_query_embedding(query)
passages_embeddings = embed_model.get_text_embedding_batch(passages)
# example #2. rag example
llm = OpenAI(model='gpt-3.5-turbo-0613', api_key=os.environ.get('OPENAI_API_KEY'), api_base=os.environ.get('OPENAI_BASE_URL'))
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
documents = SimpleDirectoryReader(input_files=["BCEmbedding/tools/eval_rag/eval_pdfs/Comp_en_llama2.pdf"]).load_data()
node_parser = SimpleNodeParser.from_defaults(chunk_size=512)
nodes = node_parser.get_nodes_from_documents(documents[0:36])
index = VectorStoreIndex(nodes, service_context=service_context)
query_engine = index.as_query_engine()
response = query_engine.query("What is llama?")
```
## ⚙️ Evaluation
### Evaluate Semantic Representation by MTEB
We provide evaluation tools for `embedding` and `reranker` models, based on [MTEB](https://github.com/embeddings-benchmark/mteb) and [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB).
我们基于[MTEB](https://github.com/embeddings-benchmark/mteb)和[C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB),提供`embedding`和`reranker`模型的语义表征评测工具。
#### 1. Embedding Models
Just run the following command to evaluate `your_embedding_model` (e.g. `maidalun1020/bce-embedding-base_v1`) in **bilingual and crosslingual settings** (e.g. `["en", "zh", "en-zh", "zh-en"]`).
运行下面命令评测`your_embedding_model`(比如,`maidalun1020/bce-embedding-base_v1`)。评测任务将会在**双语和跨语种**(比如,`["en", "zh", "en-zh", "zh-en"]`)模式下评测:
```bash
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path maidalun1020/bce-embedding-base_v1 --pooler cls
```
The evaluation covers ***114 datasets*** across the **"Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering"** tasks.
评测包含 **"Retrieval", "STS", "PairClassification", "Classification", "Reranking"和"Clustering"** 这六大类任务的 ***114个数据集***。
***NOTE:***
- **All models are evaluated in their recommended pooling method (`pooler`)**.
- `mean` pooler: "jina-embeddings-v2-base-en", "m3e-base", "m3e-large", "e5-large-v2", "multilingual-e5-base", "multilingual-e5-large" and "gte-large".
- `cls` pooler: Other models.
- "jina-embeddings-v2-base-en" model should be loaded with `trust_remote_code`.
```bash
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path {moka-ai/m3e-base | moka-ai/m3e-large} --pooler mean
python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path jinaai/jina-embeddings-v2-base-en --pooler mean --trust_remote_code
```
***注意:***
- 所有模型的评测采用各自推荐的`pooler`。"jina-embeddings-v2-base-en", "m3e-base", "m3e-large", "e5-large-v2", "multilingual-e5-base", "multilingual-e5-large"和"gte-large"的 `pooler`采用`mean`,其他模型的`pooler`采用`cls`.
- "jina-embeddings-v2-base-en"模型在载入时需要`trust_remote_code`。
#### 2. Reranker Models
Run the following command to evaluate `your_reranker_model` (e.g. `maidalun1020/bce-reranker-base_v1`) in **bilingual and crosslingual settings** (e.g. `["en", "zh", "en-zh", "zh-en"]`).
运行下面命令评测`your_reranker_model`(比如,`maidalun1020/bce-reranker-base_v1`)。评测任务将会在 **双语种和跨语种**(比如,`["en", "zh", "en-zh", "zh-en"]`)模式下评测:
```bash
python BCEmbedding/tools/eval_mteb/eval_reranker_mteb.py --model_name_or_path maidalun1020/bce-reranker-base_v1
```
The evaluation covers ***12 datasets*** for the **"Reranking"** task.
评测包含 **"Reranking"** 任务的 ***12个数据集***。
#### 3. Metrics Visualization Tool
We provide a one-click script to summarize the evaluation results of `embedding` and `reranker` models, producing the [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md) and [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md).
我们提供了`embedding`和`reranker`模型的指标可视化一键脚本,输出一个markdown文件,详见[Embedding模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md)和[Reranker模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md)。
```bash
python BCEmbedding/evaluation/mteb/summarize_eval_results.py --results_dir {your_embedding_results_dir | your_reranker_results_dir}
```
### Evaluate RAG by LlamaIndex
[LlamaIndex](https://github.com/run-llama/llama_index) is a well-known data framework for LLM-based applications, particularly RAG. Recently, the [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83) evaluated popular embedding and reranker models in a RAG pipeline and attracted wide attention. We follow its pipeline to evaluate our `BCEmbedding`.
[LlamaIndex](https://github.com/run-llama/llama_index)是一个著名的大模型应用的开源工具,在RAG中很受欢迎。最近,[LlamaIndex博客](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)对市面上常用的embedding和reranker模型进行RAG流程的评测,吸引广泛关注。下面我们按照该评测流程验证`BCEmbedding`在RAG中的效果。
First, install LlamaIndex:
```bash
pip install llama-index==0.9.22
```
#### 1. Metrics Definition
- Hit Rate:
Hit rate calculates the fraction of queries where the correct answer is found within the top-k retrieved documents. In simpler terms, it's about how often our system gets it right within the top few guesses. ***The larger, the better.***
- Mean Reciprocal Rank (MRR):
For each query, MRR evaluates the system's accuracy by looking at the rank of the highest-placed relevant document. Specifically, it's the average of the reciprocals of these ranks across all the queries. So, if the first relevant document is the top result, the reciprocal rank is 1; if it's second, the reciprocal rank is 1/2, and so on. ***The larger, the better.***
- 命中率(Hit Rate)
命中率计算的是在检索的前k个文档中找到正确答案的查询所占的比例。简单来说,它反映了我们的系统在前几次猜测中答对的频率。***该指标越大越好。***
- 平均倒数排名(Mean Reciprocal Rank,MRR)
对于每个查询,MRR通过查看最高排名的相关文档的排名来评估系统的准确性。具体来说,它是在所有查询中这些排名的倒数的平均值。因此,如果第一个相关文档是排名最靠前的结果,倒数排名就是1;如果是第二个,倒数排名就是1/2,依此类推。***该指标越大越好。***
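As a concrete illustration, both metrics can be computed from ranked retrieval results in a few lines (a standalone sketch with made-up ids, not the evaluation script used below):
```python
# Sketch: Hit Rate@k and MRR from ranked retrieval results.
# `ranked_ids` maps each query to its retrieved document ids, best first;
# `gold_ids` maps each query to the id of its correct document.
def hit_rate_at_k(ranked_ids, gold_ids, k=10):
    hits = sum(1 for q, docs in ranked_ids.items() if gold_ids[q] in docs[:k])
    return hits / len(ranked_ids)
def mrr(ranked_ids, gold_ids):
    total = 0.0
    for q, docs in ranked_ids.items():
        if gold_ids[q] in docs:
            total += 1.0 / (docs.index(gold_ids[q]) + 1)
    return total / len(ranked_ids)
ranked_ids = {"q1": ["d3", "d1", "d7"], "q2": ["d2", "d9", "d4"]}
gold_ids = {"q1": "d1", "q2": "d2"}
print(hit_rate_at_k(ranked_ids, gold_ids, k=3), mrr(ranked_ids, gold_ids))
# -> 1.0 0.75  (q1 reciprocal rank 1/2, q2 reciprocal rank 1)
```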
#### 2. Reproduce [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)
In order to compare our `BCEmbedding` with other embedding and reranker models fairly, we provide a one-click script to reproduce results of the LlamaIndex Blog, including our `BCEmbedding`:
为了公平起见,运行下面脚本,复现LlamaIndex博客的结果,将`BCEmbedding`与其他embedding和reranker模型进行对比分析:
```bash
# At least two GPUs should be available.
CUDA_VISIBLE_DEVICES=0,1 python BCEmbedding/tools/eval_rag/eval_llamaindex_reproduce.py
```
Then, summarize the evaluation results by:
```bash
python BCEmbedding/tools/eval_rag/summarize_eval_results.py --results_dir results/rag_reproduce_results
```
The results reproduced from the LlamaIndex Blog can be checked in the ***[Reproduced Summary of RAG Evaluation](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/rag_eval_reproduced_summary.md)***, with some clear ***conclusions***:
- In the `WithoutReranker` setting, our `bce-embedding-base_v1` outperforms all the other embedding models.
- With the embedding model fixed, our `bce-reranker-base_v1` achieves the best performance.
- ***The combination of `bce-embedding-base_v1` and `bce-reranker-base_v1` is SOTA.***
输出的指标汇总详见 ***[LlamaIndex RAG评测结果复现](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/rag_eval_reproduced_summary.md)***。从该复现结果中,可以看出:
- 在`WithoutReranker`设置下(**竖排对比**),`bce-embedding-base_v1`比其他embedding模型效果都要好。
- 在固定embedding模型设置下,对比不同reranker效果(**横排对比**),`bce-reranker-base_v1`比其他reranker模型效果都要好。
- ***`bce-embedding-base_v1`和`bce-reranker-base_v1`组合,表现SOTA。***
#### 3. Broad Domain Adaptability
The evaluation in the [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83) is **monolingual, small in scale, and domain-specific** (covering only the "llama2" paper). To evaluate **broad domain adaptability and bilingual and crosslingual capability**, we follow the blog's method and build a multi-domain evaluation dataset (including "Computer Science", "Physics", "Biology", "Economics", "Math", and "Quantitative Finance"), named [CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset), **generated with OpenAI `gpt-4-1106-preview` for high quality**.
在上述的[LlamaIndex博客](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)的评测数据只用了“llama2”这一篇文章,该评测是 **单语种,小数据量,特定领域** 的。为了兼容更真实更广的用户使用场景,评测算法模型的 **领域泛化性,双语和跨语种能力**,我们按照该博客的方法构建了一个多领域(计算机科学,物理学,生物学,经济学,数学,量化金融等)的双语种、跨语种评测数据,[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)。**为了保证构建数据的高质量,我们采用OpenAI的`gpt-4-1106-preview`。**
First, run the following command to evaluate the most popular and powerful embedding and reranker models:
```bash
# At least two GPUs should be available.
CUDA_VISIBLE_DEVICES=0,1 python BCEmbedding/tools/eval_rag/eval_llamaindex_multiple_domains.py
```
Then, run the following script to summarize the evaluation results:
```bash
python BCEmbedding/tools/eval_rag/summarize_eval_results.py --results_dir results/rag_results
```
The summary of multiple domains evaluations can be seen in <a href=#1-multiple-domains-scenarios>Multiple Domains Scenarios</a>.
## 📈 Leaderboard
### Semantic Representation Evaluations in MTEB
#### 1. Embedding Models
| Model | Dimensions | Pooler | Instructions | Retrieval (47) | STS (19) | PairClassification (5) | Classification (21) | Reranking (12) | Clustering (15) | ***AVG*** (119) |
|:--------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| bge-base-en-v1.5 | 768 | `cls` | Need | 37.14 | 55.06 | 75.45 | 59.73 | 43.00 | 37.74 | 47.19 |
| bge-base-zh-v1.5 | 768 | `cls` | Need | 47.63 | 63.72 | 77.40 | 63.38 | 54.95 | 32.56 | 53.62 |
| bge-large-en-v1.5 | 1024 | `cls` | Need | 37.18 | 54.09 | 75.00 | 59.24 | 42.47 | 37.32 | 46.80 |
| bge-large-zh-v1.5 | 1024 | `cls` | Need | 47.58 | 64.73 | 79.14 | 64.19 | 55.98 | 33.26 | 54.23 |
| e5-large-v2 | 1024 | `mean` | Need | 35.98 | 55.23 | 75.28 | 59.53 | 42.12 | 36.51 | 46.52 |
| gte-large | 1024 | `mean` | Free | 36.68 | 55.22 | 74.29 | 57.73 | 42.44 | 38.51 | 46.67 |
| gte-large-zh | 1024 | `cls` | Free | 41.15 | 64.62 | 77.58 | 62.04 | 55.62 | 33.03 | 51.51 |
| jina-embeddings-v2-base-en | 768 | `mean` | Free | 31.58 | 54.28 | 74.84 | 58.42 | 41.16 | 34.67 | 44.29 |
| m3e-base | 768 | `mean` | Free | 46.29 | 63.93 | 71.84 | 64.08 | 52.38 | 37.84 | 53.54 |
| m3e-large | 1024 | `mean` | Free | 34.85 | 59.74 | 67.69 | 60.07 | 48.99 | 31.62 | 46.78 |
| multilingual-e5-base | 768 | `mean` | Need | 54.73 | 65.49 | 76.97 | 69.72 | 55.01 | 38.44 | 58.34 |
| multilingual-e5-large | 1024 | `mean` | Need | 56.76 | 66.79 | 78.80 | 71.61 | 56.49 | 43.09 | 60.50 |
| ***bce-embedding-base_v1*** | 768 | `cls` | Free | 57.60 | 65.73 | 74.96 | 69.00 | 57.29 | 38.95 | 59.43 |
***NOTE:***
- Our ***bce-embedding-base_v1*** outperforms other open-source embedding models of comparable size.
- ***114 datasets*** of **"Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering"** in the `["en", "zh", "en-zh", "zh-en"]` setting.
- The [crosslingual evaluation datasets](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py) we released belong to `Retrieval` task.
- For more evaluation details, please check the [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md).
***要点:***
- 对比其他开源的相同规模的embedding模型,***bce-embedding-base_v1*** 表现最好,效果比最好的large模型稍差。
- 评测包含 **"Retrieval", "STS", "PairClassification", "Classification", "Reranking"和"Clustering"** 这六大类任务的共 ***114个数据集***。
- 我们开源的[跨语种语义表征评测数据](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)属于`Retrieval`任务。
- 更详细的评测结果详见[Embedding模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md)。
#### 2. Reranker Models
| Model | Reranking (12) | ***AVG*** (12) |
| :--------------------------------- | :-------------: | :--------------------: |
| bge-reranker-base | 59.04 | 59.04 |
| bge-reranker-large | 60.86 | 60.86 |
| ***bce-reranker-base_v1*** | **61.29** | ***61.29*** |
***NOTE:***
- Our ***bce-reranker-base_v1*** outperforms other open-source reranker models.
- ***12 datasets*** of **"Reranking"** in the `["en", "zh", "en-zh", "zh-en"]` setting.
- For more evaluation details, please check the [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md).
***要点:***
- ***bce-reranker-base_v1*** 优于其他开源reranker模型。
- 评测包含 **"Reranking"** 任务的 ***12个数据集***。
- 更详细的评测结果详见[Reranker模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md)
### RAG Evaluations in LlamaIndex
#### 1. Multiple Domains Scenarios

***NOTE:***
- Evaluated in **`["en", "zh", "en-zh", "zh-en"]` setting**.
- In the `WithoutReranker` setting, our `bce-embedding-base_v1` outperforms all the other embedding models.
- With the embedding model fixed, our `bce-reranker-base_v1` achieves the best performance.
- **The combination of `bce-embedding-base_v1` and `bce-reranker-base_v1` is SOTA**.
***要点:***
- 评测是在`["en", "zh", "en-zh", "zh-en"]`设置下。
- 在`WithoutReranker`设置下(**竖排对比**),`bce-embedding-base_v1`优于其他Embedding模型,包括开源和闭源。
- 在固定Embedding模型设置下,对比不同reranker效果(**横排对比**),`bce-reranker-base_v1`比其他reranker模型效果都要好,包括开源和闭源。
- ***`bce-embedding-base_v1`和`bce-reranker-base_v1`组合,表现SOTA。***
## 🛠 Youdao's BCEmbedding API
For users who prefer a hassle-free experience without the need to download and configure the model on their own systems, `BCEmbedding` is readily accessible through Youdao's API. This option offers a streamlined and efficient way to integrate BCEmbedding into your projects, bypassing the complexities of manual setup and maintenance. Detailed instructions and comprehensive API documentation are available at [Youdao BCEmbedding API](https://ai.youdao.com/DOCSIRMA/html/aigc/api/embedding/index.html). Here, you'll find all the necessary guidance to easily implement `BCEmbedding` across a variety of use cases, ensuring a smooth and effective integration for optimal results.
对于那些更喜欢直接调用api的用户,有道提供方便的`BCEmbedding`调用api。该方式是一种简化和高效的方式,将`BCEmbedding`集成到您的项目中,避开了手动设置和系统维护的复杂性。更详细的api调用接口说明详见[有道BCEmbedding API](https://ai.youdao.com/DOCSIRMA/html/aigc/api/embedding/index.html)。
## 🧲 WeChat Group
Welcome to scan the QR code below and join the WeChat group.
欢迎大家扫码加入官方微信交流群。

## ✏️ Citation
If you use `BCEmbedding` in your research or project, please feel free to cite and star it:
如果在您的研究或任何项目中使用本工作,烦请按照下方进行引用,并打个小星星~
```
@misc{youdao_bcembedding_2023,
title={BCEmbedding: Bilingual and Crosslingual Embedding for RAG},
author={NetEase Youdao, Inc.},
year={2023},
howpublished={\url{https://github.com/netease-youdao/BCEmbedding}}
}
```
## 🔐 License
`BCEmbedding` is licensed under the [Apache 2.0 License](https://github.com/netease-youdao/BCEmbedding/blob/master/LICENSE).
## 🔗 Related Links
[Netease Youdao - QAnything](https://github.com/netease-youdao/qanything)
[FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding)
[MTEB](https://github.com/embeddings-benchmark/mteb)
[C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB)
[LLama Index](https://github.com/run-llama/llama_index) | [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)
|
[
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] |
Non_BioNLP
|
Cheselle/finetuned-arctic
|
Cheselle
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:600",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-m",
"base_model:finetune:Snowflake/snowflake-arctic-embed-m",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,727,087,005,000 | 2024-09-23T10:24:01 | 9 | 0 |
---
base_model: Snowflake/snowflake-arctic-embed-m
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:600
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: What are the existing regulatory safety requirements mentioned
in the context for medical devices?
sentences:
- "47 \nAppendix A. Primary GAI Considerations \nThe following primary considerations\
\ were derived as overarching themes from the GAI PWG \nconsultation process.\
\ These considerations (Governance, Pre-Deployment Testing, Content Provenance,\
\ \nand Incident Disclosure) are relevant for voluntary use by any organization\
\ designing, developing, and \nusing GAI and also inform the Actions to Manage\
\ GAI risks. Information included about the primary \nconsiderations is not exhaustive,\
\ but highlights the most relevant topics derived from the GAI PWG. \nAcknowledgments:\
\ These considerations could not have been surfaced without the helpful analysis\
\ and \ncontributions from the community and NIST staff GAI PWG leads: George Awad,\
\ Luca Belli, Harold Booth, \nMat Heyman, Yooyoung Lee, Mark Pryzbocki, Reva Schwartz,\
\ Martin Stanley, and Kyra Yee. \nA.1. Governance \nA.1.1. Overview \nLike any\
\ other technology system, governance principles and techniques can be used to\
\ manage risks"
- "behavior or outcomes of a GAI model or system, how they could occur, and stress\
\ test safeguards”. AI \nred-teaming can be performed before or after AI models\
\ or systems are made available to the broader \npublic; this section focuses\
\ on red-teaming in pre-deployment contexts. \nThe quality of AI red-teaming\
\ outputs is related to the background and expertise of the AI red team \nitself.\
\ Demographically and interdisciplinarily diverse AI red teams can be used to\
\ identify flaws in the \nvarying contexts where GAI will be used. For best results,\
\ AI red teams should demonstrate domain \nexpertise, and awareness of socio-cultural\
\ aspects within the deployment context. AI red-teaming results \nshould be given\
\ additional analysis before they are incorporated into organizational governance\
\ and \ndecision making, policy and procedural updates, and AI risk management\
\ efforts. \nVarious types of AI red-teaming may be appropriate, depending on the\
\ use case: \n•"
- "SECTION TITLE\n \n \n \n \n \n \nApplying The Blueprint for an AI Bill of Rights\
\ \nRELATIONSHIP TO EXISTING LAW AND POLICY\nThere are regulatory safety requirements\
\ for medical devices, as well as sector-, population-, or technology-spe\ncific\
\ privacy and security protections. Ensuring some of the additional protections\
\ proposed in this framework \nwould require new laws to be enacted or new policies\
\ and practices to be adopted. In some cases, exceptions to \nthe principles described\
\ in the Blueprint for an AI Bill of Rights may be necessary to comply with existing\
\ law, \nconform to the practicalities of a specific use case, or balance competing\
\ public interests. In particular, law \nenforcement, and other regulatory contexts\
\ may require government actors to protect civil rights, civil liberties, \nand\
\ privacy in a manner consistent with, but using alternate mechanisms to, the\
\ specific principles discussed in"
- source_sentence: What steps should be taken to adapt processes based on findings
from incidents involving harmful content generation?
sentences:
- "some cases may include personal data. The use of personal data for GAI training\
\ raises risks to widely \naccepted privacy principles, including to transparency,\
\ individual participation (including consent), and \npurpose specification. For\
\ example, most model developers do not disclose specific data sources on \nwhich\
\ models were trained, limiting user awareness of whether personally identifiably\
\ information (PII) \nwas trained on and, if so, how it was collected. \nModels\
\ may leak, generate, or correctly infer sensitive information about individuals.\
\ For example, \nduring adversarial attacks, LLMs have revealed sensitive information\
\ (from the public domain) that was \nincluded in their training data. This problem\
\ has been referred to as data memorization, and may pose \nexacerbated privacy\
\ risks even for data present only in a small number of training samples. \n\
In addition to revealing sensitive information in GAI training data, GAI models\
\ may be able to correctly"
- "performance, feedback received, and improvements made. \nHarmful Bias and Homogenization\
\ \nMG-4.2-002 \nPractice and follow incident response plans for addressing the\
\ generation of \ninappropriate or harmful content and adapt processes based on\
\ findings to \nprevent future occurrences. Conduct post-mortem analyses of incidents\
\ with \nrelevant AI Actors, to understand the root causes and implement preventive\
\ \nmeasures. \nHuman-AI Configuration; \nDangerous, Violent, or Hateful \nContent\
\ \nMG-4.2-003 Use visualizations or other methods to represent GAI model behavior\
\ to ease \nnon-technical stakeholders understanding of GAI system functionality.\
\ \nHuman-AI Configuration \nAI Actor Tasks: AI Deployment, AI Design, AI Development,\
\ Affected Individuals and Communities, End-Users, Operation and \nMonitoring,\
\ TEVV \n \nMANAGE 4.3: Incidents and errors are communicated to relevant AI Actors,\
\ including affected communities. Processes for tracking,"
- "AI Actor Tasks: AI Deployment, AI Design, AI Impact Assessment, Affected Individuals\
\ and Communities, Domain Experts, End-\nUsers, Human Factors, Operation and Monitoring\
\ \n \nMEASURE 1.1: Approaches and metrics for measurement of AI risks enumerated\
\ during the MAP function are selected for \nimplementation starting with the\
\ most significant AI risks. The risks or trustworthiness characteristics that\
\ will not – or cannot – be \nmeasured are properly documented. \nAction ID \n\
Suggested Action \nGAI Risks \nMS-1.1-001 Employ methods to trace the origin and\
\ modifications of digital content. \nInformation Integrity \nMS-1.1-002 \nIntegrate\
\ tools designed to analyze content provenance and detect data \nanomalies, verify\
\ the authenticity of digital signatures, and identify patterns \nassociated with\
\ misinformation or manipulation. \nInformation Integrity \nMS-1.1-003 \nDisaggregate\
\ evaluation metrics by demographic factors to identify any"
- source_sentence: What are the Principles of Artificial Intelligence Ethics developed
by the US Intelligence Community intended to guide?
sentences:
- "Evaluation data; Ethical considerations; Legal and regulatory requirements. \n\
Information Integrity; Harmful Bias \nand Homogenization \nAI Actor Tasks: AI\
\ Deployment, AI Impact Assessment, Domain Experts, End-Users, Operation and Monitoring,\
\ TEVV \n \nMEASURE 2.10: Privacy risk of the AI system – as identified in the\
\ MAP function – is examined and documented. \nAction ID \nSuggested Action \n\
GAI Risks \nMS-2.10-001 \nConduct AI red-teaming to assess issues such as: Outputting\
\ of training data \nsamples, and subsequent reverse engineering, model extraction,\
\ and \nmembership inference risks; Revealing biometric, confidential, copyrighted,\
\ \nlicensed, patented, personal, proprietary, sensitive, or trade-marked information;\
\ \nTracking or revealing location information of users or members of training\
\ \ndatasets. \nHuman-AI Configuration; \nInformation Integrity; Intellectual \n\
Property \nMS-2.10-002 \nEngage directly with end-users and other stakeholders\
\ to understand their"
- "8 \nTrustworthy AI Characteristics: Accountable and Transparent, Privacy Enhanced,\
\ Safe, Secure and \nResilient \n2.5. Environmental Impacts \nTraining, maintaining,\
\ and operating (running inference on) GAI systems are resource-intensive activities,\
\ \nwith potentially large energy and environmental footprints. Energy and carbon\
\ emissions vary based on \nwhat is being done with the GAI model (i.e., pre-training,\
\ fine-tuning, inference), the modality of the \ncontent, hardware used, and type\
\ of task or application. \nCurrent estimates suggest that training a single transformer\
\ LLM can emit as much carbon as 300 round-\ntrip flights between San Francisco\
\ and New York. In a study comparing energy consumption and carbon \nemissions\
\ for LLM inference, generative tasks (e.g., text summarization) were found to\
\ be more energy- \nand carbon-intensive than discriminative or non-generative\
\ tasks (e.g., text classification)."
- "security and defense activities.21 Similarly, the U.S. Intelligence Community\
\ (IC) has developed the Principles \nof Artificial Intelligence Ethics for the\
\ Intelligence Community to guide personnel on whether and how to \ndevelop and\
\ use AI in furtherance of the IC's mission, as well as an AI Ethics Framework\
\ to help implement \nthese principles.22\nThe National Science Foundation (NSF)\
\ funds extensive research to help foster the \ndevelopment of automated systems\
\ that adhere to and advance their safety, security and \neffectiveness. Multiple\
\ NSF programs support research that directly addresses many of these principles:\
\ \nthe National AI Research Institutes23 support research on all aspects of safe,\
\ trustworthy, fair, and explainable \nAI algorithms and systems; the Cyber Physical\
\ Systems24 program supports research on developing safe \nautonomous and cyber\
\ physical systems with AI components; the Secure and Trustworthy Cyberspace25"
- source_sentence: How does Hagan (2024) propose to establish quality standards for
AI responses to legal problems?
sentences:
- "actually occurring, or large-scale risks could occur); and broad GAI negative\
\ risks, \nincluding: Immature safety or risk cultures related to AI and GAI design,\
\ \ndevelopment and deployment, public information integrity risks, including\
\ impacts \non democratic processes, unknown long-term performance characteristics\
\ of GAI. \nInformation Integrity; Dangerous, \nViolent, or Hateful Content; CBRN\
\ \nInformation or Capabilities \nGV-1.3-007 Devise a plan to halt development\
\ or deployment of a GAI system that poses \nunacceptable negative risk. \nCBRN\
\ Information and Capability; \nInformation Security; Information \nIntegrity\
\ \nAI Actor Tasks: Governance and Oversight \n \nGOVERN 1.4: The risk management\
\ process and its outcomes are established through transparent policies, procedures,\
\ and other \ncontrols based on organizational risk priorities. \nAction ID \n\
Suggested Action \nGAI Risks \nGV-1.4-001 \nEstablish policies and mechanisms\
\ to prevent GAI systems from generating"
- "gists, advocates, journalists, policymakers, and communities in the United States\
\ and around the world. This \ntechnical companion is intended to be used as a\
\ reference by people across many circumstances – anyone \nimpacted by automated\
\ systems, and anyone developing, designing, deploying, evaluating, or making\
\ policy to \ngovern the use of an automated system. \nEach principle is accompanied\
\ by three supplemental sections: \n1\n2\nWHY THIS PRINCIPLE IS IMPORTANT: \n\
This section provides a brief summary of the problems that the principle seeks\
\ to address and protect against, including \nillustrative examples. \nWHAT SHOULD\
\ BE EXPECTED OF AUTOMATED SYSTEMS: \n• The expectations for automated systems\
\ are meant to serve as a blueprint for the development of additional technical\n\
standards and practices that should be tailored for particular sectors and contexts.\n\
• This section outlines practical steps that can be implemented to realize the\
\ vision of the Blueprint for an AI Bill of Rights. The"
- "Greshake, K. et al. (2023) Not what you've signed up for: Compromising Real-World\
\ LLM-Integrated \nApplications with Indirect Prompt Injection. arXiv. https://arxiv.org/abs/2302.12173\
\ \nHagan, M. (2024) Good AI Legal Help, Bad AI Legal Help: Establishing quality\
\ standards for responses to \npeople’s legal problem stories. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4696936\
\ \nHaran, R. (2023) Securing LLM Systems Against Prompt Injection. NVIDIA. \n\
https://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/\
\ \nInformation Technology Industry Council (2024) Authenticating AI-Generated\
\ Content. \nhttps://www.itic.org/policy/ITI_AIContentAuthorizationPolicy_122123.pdf\
\ \nJain, S. et al. (2023) Algorithmic Pluralism: A Structural Approach To Equal\
\ Opportunity. arXiv. \nhttps://arxiv.org/pdf/2305.08157 \nJi, Z. et al (2023)\
\ Survey of Hallucination in Natural Language Generation. ACM Comput. Surv. 55,\
\ 12, \nArticle 248. https://doi.org/10.1145/3571730"
- source_sentence: How can information security measures be applied to maintain the
integrity and confidentiality of GAI models and systems?
sentences:
- "using: field testing with sub-group populations to determine likelihood of \n\
exposure to generated content exhibiting harmful bias, AI red-teaming with \n\
counterfactual and low-context (e.g., “leader,” “bad guys”) prompts. For ML \n\
pipelines or business processes with categorical or numeric outcomes that rely\
\ \non GAI, apply general fairness metrics (e.g., demographic parity, equalized\
\ odds, \nequal opportunity, statistical hypothesis tests), to the pipeline or\
\ business \noutcome where appropriate; Custom, context-specific metrics developed\
\ in \ncollaboration with domain experts and affected communities; Measurements\
\ of \nthe prevalence of denigration in generated content in deployment (e.g.,\
\ sub-\nsampling a fraction of traffic and manually annotating denigrating content).\
\ \nHarmful Bias and Homogenization; \nDangerous, Violent, or Hateful \nContent\
\ \nMS-2.11-003 \nIdentify the classes of individuals, groups, or environmental\
\ ecosystems which"
- "27 \nMP-4.1-010 \nConduct appropriate diligence on training data use to assess\
\ intellectual property, \nand privacy, risks, including to examine whether use\
\ of proprietary or sensitive \ntraining data is consistent with applicable laws.\
\ \nIntellectual Property; Data Privacy \nAI Actor Tasks: Governance and Oversight,\
\ Operation and Monitoring, Procurement, Third-party entities \n \nMAP 5.1: Likelihood\
\ and magnitude of each identified impact (both potentially beneficial and harmful)\
\ based on expected use, past \nuses of AI systems in similar contexts, public\
\ incident reports, feedback from those external to the team that developed or\
\ deployed \nthe AI system, or other data are identified and documented. \nAction\
\ ID \nSuggested Action \nGAI Risks \nMP-5.1-001 Apply TEVV practices for content\
\ provenance (e.g., probing a system's synthetic \ndata generation capabilities\
\ for potential misuse or vulnerabilities. \nInformation Integrity; Information\
\ \nSecurity \nMP-5.1-002"
- "vulnerabilities in systems (hardware, software, data) and write code to exploit\
\ them. Sophisticated threat \nactors might further these risks by developing\
\ GAI-powered security co-pilots for use in several parts of \nthe attack chain,\
\ including informing attackers on how to proactively evade threat detection and\
\ escalate \nprivileges after gaining system access. \nInformation security for\
\ GAI models and systems also includes maintaining availability of the GAI system\
\ \nand the integrity and (when applicable) the confidentiality of the GAI code,\
\ training data, and model \nweights. To identify and secure potential attack\
\ points in AI systems or specific components of the AI \n \n \n12 See also https://doi.org/10.6028/NIST.AI.100-4,\
\ to be published."
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.81
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.96
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.99
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.81
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.31999999999999995
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19799999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.81
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.96
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.99
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9167865159386339
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8887499999999998
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8887499999999998
name: Cosine Map@100
- type: dot_accuracy@1
value: 0.81
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.96
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.99
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 1.0
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.81
name: Dot Precision@1
- type: dot_precision@3
value: 0.31999999999999995
name: Dot Precision@3
- type: dot_precision@5
value: 0.19799999999999998
name: Dot Precision@5
- type: dot_precision@10
value: 0.09999999999999998
name: Dot Precision@10
- type: dot_recall@1
value: 0.81
name: Dot Recall@1
- type: dot_recall@3
value: 0.96
name: Dot Recall@3
- type: dot_recall@5
value: 0.99
name: Dot Recall@5
- type: dot_recall@10
value: 1.0
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.9167865159386339
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.8887499999999998
name: Dot Mrr@10
- type: dot_map@100
value: 0.8887499999999998
name: Dot Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-m
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m) <!-- at revision e2b128b9fa60c82b4585512b33e1544224ffff42 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Cheselle/finetuned-arctic")
# Run inference
sentences = [
'How can information security measures be applied to maintain the integrity and confidentiality of GAI models and systems?',
'vulnerabilities in systems (hardware, software, data) and write code to exploit them. Sophisticated threat \nactors might further these risks by developing GAI-powered security co-pilots for use in several parts of \nthe attack chain, including informing attackers on how to proactively evade threat detection and escalate \nprivileges after gaining system access. \nInformation security for GAI models and systems also includes maintaining availability of the GAI system \nand the integrity and (when applicable) the confidentiality of the GAI code, training data, and model \nweights. To identify and secure potential attack points in AI systems or specific components of the AI \n \n \n12 See also https://doi.org/10.6028/NIST.AI.100-4, to be published.',
"27 \nMP-4.1-010 \nConduct appropriate diligence on training data use to assess intellectual property, \nand privacy, risks, including to examine whether use of proprietary or sensitive \ntraining data is consistent with applicable laws. \nIntellectual Property; Data Privacy \nAI Actor Tasks: Governance and Oversight, Operation and Monitoring, Procurement, Third-party entities \n \nMAP 5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past \nuses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed \nthe AI system, or other data are identified and documented. \nAction ID \nSuggested Action \nGAI Risks \nMP-5.1-001 Apply TEVV practices for content provenance (e.g., probing a system's synthetic \ndata generation capabilities for potential misuse or vulnerabilities. \nInformation Integrity; Information \nSecurity \nMP-5.1-002",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.81 |
| cosine_accuracy@3 | 0.96 |
| cosine_accuracy@5 | 0.99 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.81 |
| cosine_precision@3 | 0.32 |
| cosine_precision@5 | 0.198 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.81 |
| cosine_recall@3 | 0.96 |
| cosine_recall@5 | 0.99 |
| cosine_recall@10 | 1.0 |
| cosine_ndcg@10 | 0.9168 |
| cosine_mrr@10 | 0.8887 |
| **cosine_map@100** | **0.8887** |
| dot_accuracy@1 | 0.81 |
| dot_accuracy@3 | 0.96 |
| dot_accuracy@5 | 0.99 |
| dot_accuracy@10 | 1.0 |
| dot_precision@1 | 0.81 |
| dot_precision@3 | 0.32 |
| dot_precision@5 | 0.198 |
| dot_precision@10 | 0.1 |
| dot_recall@1 | 0.81 |
| dot_recall@3 | 0.96 |
| dot_recall@5 | 0.99 |
| dot_recall@10 | 1.0 |
| dot_ndcg@10 | 0.9168 |
| dot_mrr@10 | 0.8887 |
| dot_map@100 | 0.8887 |
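For reference, the same evaluator can be run on your own data roughly as follows (a sketch with toy queries and corpus; the evaluation set used for the scores above is not shipped with this card):
```python
# Sketch: running InformationRetrievalEvaluator on a toy corpus.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator
model = SentenceTransformer("Cheselle/finetuned-arctic")
queries = {"q1": "What does the AI risk framework cover?"}
corpus = {"d1": "The framework profiles generative AI risks.", "d2": "Unrelated text."}
relevant_docs = {"q1": {"d1"}}  # which corpus ids are relevant for each query
evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)
print(results)
```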
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 600 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 600 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 21.75 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 177.81 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What is the title of the publication related to Artificial Intelligence Risk Management by NIST?</code> | <code>NIST Trustworthy and Responsible AI <br>NIST AI 600-1 <br>Artificial Intelligence Risk Management <br>Framework: Generative Artificial <br>Intelligence Profile <br> <br> <br> <br>This publication is available free of charge from: <br>https://doi.org/10.6028/NIST.AI.600-1</code> |
| <code>Where can the NIST AI 600-1 publication be accessed for free?</code> | <code>NIST Trustworthy and Responsible AI <br>NIST AI 600-1 <br>Artificial Intelligence Risk Management <br>Framework: Generative Artificial <br>Intelligence Profile <br> <br> <br> <br>This publication is available free of charge from: <br>https://doi.org/10.6028/NIST.AI.600-1</code> |
| <code>What is the title of the publication released by NIST in July 2024 regarding artificial intelligence?</code> | <code>NIST Trustworthy and Responsible AI <br>NIST AI 600-1 <br>Artificial Intelligence Risk Management <br>Framework: Generative Artificial <br>Intelligence Profile <br> <br> <br> <br>This publication is available free of charge from: <br>https://doi.org/10.6028/NIST.AI.600-1 <br> <br>July 2024 <br> <br> <br> <br> <br>U.S. Department of Commerce <br>Gina M. Raimondo, Secretary <br>National Institute of Standards and Technology <br>Laurie E. Locascio, NIST Director and Under Secretary of Commerce for Standards and Technology</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
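As an illustration, a loss configured this way is typically constructed in Sentence Transformers as sketched below; the base checkpoint name is a placeholder, since the card does not restate it here.

```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")  # placeholder 768-dim base model

# MultipleNegativesRankingLoss wrapped by MatryoshkaLoss, mirroring the parameters above.
inner_loss = losses.MultipleNegativesRankingLoss(model)
train_loss = losses.MatryoshkaLoss(
    model=model,
    loss=inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```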
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 20
- `per_device_eval_batch_size`: 20
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_map@100 |
|:------:|:----:|:--------------:|
| 1.0 | 30 | 0.8699 |
| 1.6667 | 50 | 0.8879 |
| 2.0 | 60 | 0.8887 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION",
"SUMMARIZATION"
] |
Non_BioNLP
|
uaritm/multilingual_en_uk_pl_ru
|
uaritm
|
sentence-similarity
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers - multilingual - en - ru - uk - pl",
"uk",
"en",
"pl",
"ru",
"dataset:Helsinki-NLP/tatoeba_mt",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,683,918,747,000 | 2023-06-04T16:34:24 | 330 | 2 |
---
datasets:
- Helsinki-NLP/tatoeba_mt
language:
- uk
- en
- pl
- ru
library_name: sentence-transformers
license: apache-2.0
metrics:
- mse
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers - multilingual - en - ru - uk - pl
---
# uaritm/multilingual_en_uk_pl_ru
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
The model powers a multilingual service that analyses patient complaints and determines which medical specialty is needed: [Virtual General Practice](https://aihealth.site). You can test the quality and speed of the model there.
This model is an updated version of [uaritm/multilingual_en_ru_uk](https://huggingface.co/uaritm/multilingual_en_ru_uk).
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('uaritm/multilingual_en_uk_pl_ru')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('uaritm/multilingual_en_uk_pl_ru')
model = AutoModel.from_pretrained('uaritm/multilingual_en_uk_pl_ru')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=uaritm/multilingual_en_uk_pl_ru)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 50184 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
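Putting these pieces together, the training call implied by the parameters above looks roughly like the sketch below. The backbone and the single dummy training example are placeholders (the real setup distilled teacher embeddings over parallel sentences, which this card does not reproduce).

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumption: an XLM-R backbone, matching the architecture section below.
model = SentenceTransformer("xlm-roberta-base")

# Dummy example: MSELoss expects the target (teacher) embedding as the label.
train_examples = [InputExample(texts=["example sentence"], label=[0.0] * 768)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.MSELoss(model=model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    evaluation_steps=1000,
    scheduler="WarmupLinear",
    warmup_steps=10000,
    optimizer_params={"lr": 2e-05, "eps": 1e-06},
    weight_decay=0.01,
    max_grad_norm=1,
)
```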
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```
@misc{Uaritm,
title={sentence-transformers: Semantic similarity of medical texts},
author={Vitaliy Ostashko},
year={2023},
url={https://aihealth.site},
}
```
<!--- Describe where people can find more information -->
|
[
"SEMANTIC_SIMILARITY"
] |
BioNLP
|
sobamchan/roberta-base-mean-softmax-300
|
sobamchan
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:942069",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/all-nli",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,739,726,504,000 | 2025-02-16T17:23:00 | 33 | 0 |
---
base_model: FacebookAI/roberta-base
datasets:
- sentence-transformers/all-nli
language:
- en
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:942069
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Two women having drinks and smoking cigarettes at the bar.
sentences:
- Women are celebrating at a bar.
- Two kids are outdoors.
- The four girls are attending the street festival.
- source_sentence: Two male police officers on patrol, wearing the normal gear and
bright green reflective shirts.
sentences:
- The officers have shot an unarmed black man and will not go to prison for it.
- The four girls are playing card games at the table.
- A woman is playing with a toddler.
- source_sentence: 5 women sitting around a table doing some crafts.
sentences:
- The girl wearing a dress skips down the sidewalk.
- The kids are together.
- Five men stand on chairs.
- source_sentence: Three men look on as two other men carve up a freshly barbecued
hog in the backyard.
sentences:
- A group of people prepare cars for racing.
- There are men watching others prepare food
- They are both waiting for a bus.
- source_sentence: The little boy is jumping into a puddle on the street.
sentences:
- A man is wearing a black shirt
- The dog is playing with a ball.
- The boy is outside.
---
# SentenceTransformer based on FacebookAI/roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) <!-- at revision e2da8e2f811d1448a5b465c236feacd80ffbac7b -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sobamchan/roberta-base-mean-softmax-300")
# Run inference
sentences = [
'The little boy is jumping into a puddle on the street.',
'The boy is outside.',
'The dog is playing with a ball.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 942,069 training samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.4 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.69 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>0: ~33.40%</li><li>1: ~33.30%</li><li>2: ~33.30%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:--------------------------------------------------------------------|:---------------------------------------------------------------|:---------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is training his horse for a competition.</code> | <code>1</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is at a diner, ordering an omelette.</code> | <code>2</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>0</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 19,657 evaluation samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 18.46 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.57 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>0: ~33.10%</li><li>1: ~33.30%</li><li>2: ~33.60%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:-------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|:---------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>The sisters are hugging goodbye while holding to go packages after just eating lunch.</code> | <code>1</code> |
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>0</code> |
| <code>Two women are embracing while holding to go packages.</code> | <code>The men are fighting outside a deli.</code> | <code>2</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
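For orientation, a training run matching this configuration would look roughly like the following with the Sentence Transformers v3 Trainer API. The `pair-class` subset is assumed because it matches the listed columns and the 942,069-row size; evaluators, the `no_duplicates` batch sampler, and other callbacks are omitted.

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("FacebookAI/roberta-base")
train_dataset = load_dataset("sentence-transformers/all-nli", "pair-class", split="train")
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="roberta-base-all-nli",
    num_train_epochs=3,
    per_device_train_batch_size=128,
    learning_rate=1e-5,
    warmup_ratio=0.1,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```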
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `learning_rate`: 1e-05
- `warmup_ratio`: 0.1
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0007 | 5 | - | 4.4994 |
| 0.0014 | 10 | - | 4.4981 |
| 0.0020 | 15 | - | 4.4960 |
| 0.0027 | 20 | - | 4.4930 |
| 0.0034 | 25 | - | 4.4890 |
| 0.0041 | 30 | - | 4.4842 |
| 0.0048 | 35 | - | 4.4784 |
| 0.0054 | 40 | - | 4.4716 |
| 0.0061 | 45 | - | 4.4636 |
| 0.0068 | 50 | - | 4.4543 |
| 0.0075 | 55 | - | 4.4438 |
| 0.0082 | 60 | - | 4.4321 |
| 0.0088 | 65 | - | 4.4191 |
| 0.0095 | 70 | - | 4.4042 |
| 0.0102 | 75 | - | 4.3875 |
| 0.0109 | 80 | - | 4.3686 |
| 0.0115 | 85 | - | 4.3474 |
| 0.0122 | 90 | - | 4.3236 |
| 0.0129 | 95 | - | 4.2968 |
| 0.0136 | 100 | 4.4995 | 4.2666 |
| 0.0143 | 105 | - | 4.2326 |
| 0.0149 | 110 | - | 4.1947 |
| 0.0156 | 115 | - | 4.1516 |
| 0.0163 | 120 | - | 4.1029 |
| 0.0170 | 125 | - | 4.0476 |
| 0.0177 | 130 | - | 3.9850 |
| 0.0183 | 135 | - | 3.9162 |
| 0.0190 | 140 | - | 3.8397 |
| 0.0197 | 145 | - | 3.7522 |
| 0.0204 | 150 | - | 3.6521 |
| 0.0211 | 155 | - | 3.5388 |
| 0.0217 | 160 | - | 3.4114 |
| 0.0224 | 165 | - | 3.2701 |
| 0.0231 | 170 | - | 3.1147 |
| 0.0238 | 175 | - | 2.9471 |
| 0.0245 | 180 | - | 2.7710 |
| 0.0251 | 185 | - | 2.5909 |
| 0.0258 | 190 | - | 2.4127 |
| 0.0265 | 195 | - | 2.2439 |
| 0.0272 | 200 | 3.6918 | 2.0869 |
| 0.0279 | 205 | - | 1.9477 |
| 0.0285 | 210 | - | 1.8274 |
| 0.0292 | 215 | - | 1.7156 |
| 0.0299 | 220 | - | 1.6211 |
| 0.0306 | 225 | - | 1.5416 |
| 0.0312 | 230 | - | 1.4732 |
| 0.0319 | 235 | - | 1.4176 |
| 0.0326 | 240 | - | 1.3702 |
| 0.0333 | 245 | - | 1.3269 |
| 0.0340 | 250 | - | 1.2892 |
| 0.0346 | 255 | - | 1.2563 |
| 0.0353 | 260 | - | 1.2281 |
| 0.0360 | 265 | - | 1.2024 |
| 0.0367 | 270 | - | 1.1796 |
| 0.0374 | 275 | - | 1.1601 |
| 0.0380 | 280 | - | 1.1428 |
| 0.0387 | 285 | - | 1.1271 |
| 0.0394 | 290 | - | 1.1129 |
| 0.0401 | 295 | - | 1.1002 |
| 0.0408 | 300 | 1.7071 | 1.0876 |
### Framework Versions
- Python: 3.12.8
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.2.0+cu121
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
SGaleshchuk/t5-large-ua-news
|
SGaleshchuk
|
summarization
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"uk",
"dataset:UberText",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,668,459,530,000 | 2024-12-12T12:55:02 | 57 | 3 |
---
datasets:
- UberText
language:
- uk
license: mit
tags:
- summarization
max_length:
- 120
widget:
- text: 15 листопада чисельність населення Землі досягла восьми мільярдів, повідомляє
ООН. Зазначають, що нашій планеті знадобилося лише 11 років, щоб вирости з семи
до восьми мільярдів. Таке зростання ООН пояснила поступовим збільшенням тривалості
життя людини завдяки поліпшенню охорони здоров'я, харчування, особистої гігієни
та медицини. Це також результат високого та постійного рівня народжуваності в
деяких країнах.
---
The mt5-large model has been fine-tuned on data from the Ukrainian [UberText](https://lang.org.ua/en/corpora/) corpus.
The dataset contains around 40K articles about politics, science, technology, and social life, collected from Hromadske.ua up to December 2021.
##### Load the model and the mT5 tokenizer:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline
tokenizer = AutoTokenizer.from_pretrained("google/mt5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("SGaleshchuk/t5-large-ua-news")
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer, framework="pt")
# Try it on your own example
summary = summarizer("15 листопада чисельність населення Землі досягла восьми мільярдів, повідомляє ООН. Зазначають, що нашій планеті знадобилося лише 11 років, щоб вирости з семи до восьми мільярдів. Таке зростання ООН пояснила поступовим збільшенням тривалості життя людини завдяки поліпшенню охорони здоров'я, харчування, особистої гігієни та медицини. Це також результат високого та постійного рівня народжуваності в деяких країнах.", min_length=3, max_length = 128)
print(summary)
# [{'summary_text': 'Чисельність населення Землі зросла до восьми мільярдів. '}]
```
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
SillyTilly/google-gemma-2-27b-it
|
SillyTilly
|
text-generation
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:2203.09509",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,719,508,608,000 | 2024-06-27T17:34:51 | 7 | 0 |
---
library_name: transformers
license: gemma
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent/verify/huggingface?returnModelRepoId=google/gemma-2-27b-it)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your usecase.
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
torch_dtype=torch.bfloat16
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
<a name="precisions"></a>
#### Running the model on a GPU using different precisions
The native weights of this model were exported in `bfloat16` precision. You can use `float16`, which may be faster on certain hardware, by indicating the `torch_dtype` when loading the model. For convenience, the `float16` revision of the repo contains a copy of the weights already converted to that precision.
You can also use `float32` if you skip the dtype, but no precision increase will occur (model weights will just be upcasted to `float32`). See examples below.
* _Using `torch.float16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
torch_dtype=torch.float16,
revision="float16",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto",
torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Upcasting to `torch.float32`_
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
device_map="auto"
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b-it")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-27b-it",
quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "google/gemma-2-27b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
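For example, a small helper that reproduces the same string by hand might look like this (an illustrative sketch that simply mirrors the turn format shown above, not an official API):

```py
# Build a Gemma 2 chat prompt manually, mirroring the template shown above.
def build_gemma_prompt(messages):
    prompt = "<bos>"
    for message in messages:  # each message: {"role": "user" | "model", "content": "..."}
        prompt += f"<start_of_turn>{message['role']}\n{message['content']}<end_of_turn>\n"
    prompt += "<start_of_turn>model\n"  # generation prompt: the model speaks next
    return prompt

manual_prompt = build_gemma_prompt([{"role": "user", "content": "Write a hello world program"}])
print(manual_prompt)
```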
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
print(tokenizer.decode(outputs[0]))
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
| ------------------------------ | ------------- | ----------- | ------------ |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
| ------------------------ | ------------- | --------------- | ---------------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
|
[
"QUESTION_ANSWERING",
"SUMMARIZATION"
] |
Non_BioNLP
|
pt-sk/transformer_eng-it
|
pt-sk
|
translation
|
[
"tensorboard",
"Transformers",
"Pytorch",
"translation",
"license:mit",
"region:us"
] | 1,710,600,217,000 | 2024-05-07T07:04:32 | 0 | 0 |
---
license: mit
pipeline_tag: translation
tags:
- Transformers
- Pytorch
---
This model uses the vanilla Transformer architecture to translate text from English to Italian.
|
[
"TRANSLATION"
] |
Non_BioNLP
|
lixiqi/wiki_lingua-id-8-3-5.6e-05-mt5-small-finetuned
|
lixiqi
|
summarization
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:wiki_lingua",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,683,011,124,000 | 2023-05-02T11:23:56 | 29 | 0 |
---
datasets:
- wiki_lingua
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: wiki_lingua-id-8-3-5.6e-05-mt5-small-finetuned
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: wiki_lingua
type: wiki_lingua
config: id
split: test
args: id
metrics:
- type: rouge
value: 18.0064
name: Rouge1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wiki_lingua-id-8-3-5.6e-05-mt5-small-finetuned
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wiki_lingua dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3388
- Rouge1: 18.0064
- Rouge2: 5.5315
- Rougel: 16.1048
- Rougelsum: 17.6763
## Baseline LEAD-64
- Rouge1: 20.32
- Rouge2: 4.94
- Rougel: 14.0
- Rougelsum: 14.0
## Model description
More information needed
## Intended uses & limitations
More information needed
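As a starting point, inference with the standard 🤗 Transformers summarization pipeline should work as sketched below; the generation settings and the placeholder article are illustrative, not taken from this card.

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="lixiqi/wiki_lingua-id-8-3-5.6e-05-mt5-small-finetuned",
)

# Replace with the Indonesian article you want to summarize.
article = "Ganti teks ini dengan artikel berbahasa Indonesia yang ingin diringkas."
print(summarizer(article, min_length=8, max_length=64))
```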
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.4701 | 1.0 | 4029 | 2.4403 | 17.0314 | 5.0932 | 15.3277 | 16.713 |
| 2.8067 | 2.0 | 8058 | 2.3568 | 17.6738 | 5.3508 | 15.8002 | 17.336 |
| 2.7095 | 3.0 | 12087 | 2.3388 | 18.0064 | 5.5315 | 16.1048 | 17.6763 |
### Framework versions
- Transformers 4.27.4
- Pytorch 1.13.0
- Datasets 2.1.0
- Tokenizers 0.13.2
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
tg1482/setfit-chat-intent-classifier-lda
|
tg1482
|
text-classification
|
[
"setfit",
"joblib",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"region:us"
] | 1,737,005,579,000 | 2025-01-16T05:33:39 | 9 | 0 |
---
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Point out any dull descriptions that need more color
- text: Find places where I repeat my main points unnecessarily
- text: What's a compelling method to reveal a secret in my plot
- text: How do I handle flashbacks in a non-linear story
- text: Suggest some comedic elements to lighten a dark plot
inference: true
---
# SetFit
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A LinearDiscriminantAnalysis instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
<!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) -->
- **Classification head:** a LinearDiscriminantAnalysis instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'Can you identify specific areas that need improvement in my text'</li><li>'Point out the flaws in my writing style, please'</li><li>'Which parts of my draft are the weakest'</li></ul> |
| 0 | <ul><li>"How do I make my character's driving force more compelling"</li><li>"Any tips to deepen my protagonist's underlying goals"</li><li>"Suggestions for strengthening the reasons behind my character's actions"</li></ul> |
| 2 | <ul><li>'How does the Pro version elevate my writing experience'</li><li>'Could you list the premium perks of Quarkle Pro'</li><li>'What special advantages come with upgrading to Pro'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("tg1482/setfit-chat-intent-classifier-lda")
# Run inference
preds = model("How do I handle flashbacks in a non-linear story")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 8.7947 | 14 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 153 |
| 1 | 144 |
| 2 | 117 |
### Framework Versions
- Python: 3.10.15
- SetFit: 1.2.0.dev0
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
Helsinki-NLP/opus-mt-af-ru
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"af",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:25:26 | 32 | 0 |
---
language:
- af
- ru
license: apache-2.0
tags:
- translation
---
### afr-rus
* source group: Afrikaans
* target group: Russian
* OPUS readme: [afr-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md)
* model: transformer-align
* source language(s): afr
* target language(s): rus
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.eval.txt)
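For quick inference with 🤗 Transformers the usual MarianMT recipe applies; the Afrikaans example sentence below is only an illustration:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-af-ru"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Afrikaans -> Russian
batch = tokenizer(["Die weer is vandag mooi."], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```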
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afr.rus | 38.2 | 0.580 |
### System Info:
- hf_name: afr-rus
- source_languages: afr
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['af', 'ru']
- src_constituents: {'afr'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.test.txt
- src_alpha3: afr
- tgt_alpha3: rus
- short_pair: af-ru
- chrF2_score: 0.58
- bleu: 38.2
- brevity_penalty: 0.992
- ref_len: 1213.0
- src_name: Afrikaans
- tgt_name: Russian
- train_date: 2020-06-17
- src_alpha2: af
- tgt_alpha2: ru
- prefer_old: False
- long_pair: afr-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
[
"TRANSLATION"
] |
Non_BioNLP
|
TheBloke/Chronos-13B-SuperHOT-8K-fp16
|
TheBloke
|
text-generation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"custom_code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 1,687,871,781,000 | 2023-07-09T20:24:53 | 28 | 3 |
---
license: other
inference: false
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Elinas' Chronos 13B fp16
These are fp16 pytorch format model files for [Elinas' Chronos 13B](https://huggingface.co/elinas/chronos-13b) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test).
[Kaio Ken's SuperHOT 13b LoRA](https://huggingface.co/kaiokendev/superhot-13b-8k-no-rlhf-test) is merged onto the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`.
Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length.
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Chronos-13B-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Chronos-13B-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Chronos-13B-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/elinas/chronos-13b)
## How to use this model from Python code
First make sure you have Einops installed:
```
pip3 install einops
```
Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code.
The provided modelling code, activated with `trust_remote_code=True` will automatically set the `scale` parameter from the configured `max_position_embeddings`. Eg for 8192, `scale` is set to `4`.
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline
model_name_or_path = "TheBloke/Chronos-13B-SuperHOT-8K-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
config=config,
trust_remote_code=True,
device_map='auto')
# Note: check to confirm that this prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
temperature=0.7,
top_p=0.95,
repetition_penalty=1.15
)
print(pipe(prompt_template)[0]['generated_text'])
```
## Using other UIs: monkey patch
Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.
It can theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Kaio Ken's SuperHOT 8K
### SuperHOT Prototype 2 w/ 8K Context
This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
Tests have shown that the model does indeed leverage the extended context at 8K.
You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**
#### Looking for Merged & Quantized Models?
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)
#### Training Details
I trained the LoRA with the following configuration:
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
- q_proj
- k_proj
- v_proj
- o_proj
- no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
# Original model card: Elinas' Chronos 13B
# chronos-13b
This is the fp16 PyTorch / HF version of **chronos-13b**
This model is primarily focused on chat, roleplay, and storywriting, but can accomplish other tasks such as simple reasoning and coding.
Chronos generates very long outputs with coherent text, largely due to the human inputs it was trained on.
This model uses Alpaca formatting, so for optimal model performance, use:
```
### Instruction:
Your instruction or question here.
### Response:
```
[4bit Quantized version](https://huggingface.co/elinas/chronos-13b-4bit)
[GGML Version provided by @TheBloke](https://huggingface.co/TheBloke/chronos-13B-GGML)
<!--**Support My Development of New Models**
<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;'
src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>-->
---
license: other
---
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December. 2022 and Feb. 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
exploring potential applications such as question answering, natural language understanding or reading comprehension,
understanding capabilities and limitations of current language models, and developing techniques to improve those,
evaluating and mitigating biases, risks, toxic and harmful content generations, hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measure to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following source of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange[2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
<table>
<thead>
<tr>
<th >LLaMA</th> <th colspan=6>Model hyper parameters </th>
</tr>
<tr>
<th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
</tr>
<tr>
<th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
<tr>
<th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T
</tr>
</tbody>
</table>
*Table 1 - Summary of LLama Model Hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
<table>
<thead>
<tr>
<th>LLaMA</th> <th colspan=9>Reasoning tasks </th>
</tr>
<tr>
<th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
</tr>
</thead>
<tbody>
<tr>
<th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
</th>
<tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
</th>
<tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
</th>
<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
</tbody>
</table>
*Table 2 - Summary of LLama Model Performance on Reasoning tasks*
We present our results on bias in the table below. Note that lower value is better indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary bias of our model output*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
[
"QUESTION_ANSWERING"
] |
Non_BioNLP
|
rawsh/mirrorqwen2.5-0.5b-SimPO-2
|
rawsh
|
text-generation
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"cpo",
"unsloth",
"arxiv:2401.08417",
"base_model:rawsh/mirrorqwen2.5-0.5b-SimPO-1",
"base_model:finetune:rawsh/mirrorqwen2.5-0.5b-SimPO-1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,731,296,866,000 | 2024-11-11T04:04:28 | 23 | 0 |
---
base_model: rawsh/mirrorqwen2.5-0.5b-SimPO-1
library_name: transformers
model_name: mirrorqwen2.5-0.5b-SimPO-2
tags:
- generated_from_trainer
- trl
- cpo
- unsloth
licence: license
---
# Model Card for mirrorqwen2.5-0.5b-SimPO-2
This model is a fine-tuned version of [rawsh/mirrorqwen2.5-0.5b-SimPO-1](https://huggingface.co/rawsh/mirrorqwen2.5-0.5b-SimPO-1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rawsh/mirrorqwen2.5-0.5b-SimPO-2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dankgpt/simpo-training/runs/8cv151mo)
This model was trained with CPO, a method introduced in [Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation](https://huggingface.co/papers/2401.08417).
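As a very rough sketch of what such a CPO run looks like with TRL (the preference data, hyperparameters and output directory below are placeholders, not the actual configuration used for this model):
```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import CPOConfig, CPOTrainer

base = "rawsh/mirrorqwen2.5-0.5b-SimPO-1"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Toy preference pairs in the prompt/chosen/rejected format CPO expects.
train_dataset = Dataset.from_dict({
    "prompt": ["If you had a time machine, would you visit the past or the future?"],
    "chosen": ["I would visit the future to see how today's open problems get solved."],
    "rejected": ["Time machine."],
})

args = CPOConfig(output_dir="mirrorqwen2.5-0.5b-cpo-sketch", per_device_train_batch_size=1, num_train_epochs=1)
trainer = CPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```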
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.2
- Pytorch: 2.4.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite CPO as:
```bibtex
@inproceedings{xu2024contrastive,
title = {{Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation}},
author = {Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim},
year = 2024,
booktitle = {Forty-first International Conference on Machine Learning, {ICML} 2024, Vienna, Austria, July 21-27, 2024},
publisher = {OpenReview.net},
url = {https://openreview.net/forum?id=51iwkioZpn}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
[
"TRANSLATION"
] |
Non_BioNLP
|
PriyankaHundalekar/Hindi-Offensive-Analyzer-MuRIL
|
PriyankaHundalekar
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,697,037,640,000 | 2023-10-11T16:32:47 | 44 | 0 |
---
{}
---
## Hindi-Offensive-Analyzer-MuRIL
### Model Description
## Overview
Hindi-Offensive-Analyzer-MuRIL is a fine-tuned language model based on MuRIL (Multilingual Representations for Indian Languages), a powerful BERT-based model designed to handle a diverse range of 17 Indian languages, including their transliterated counterparts. This fine-tuned model has been specifically tailored for the task of classifying hate and non-hate comments in Hindi.
## MuRIL Base Cased
The MuRIL model serves as the foundation for Hindi-Offensive-Analyzer-MuRIL. MuRIL is a language model pre-trained on a vast dataset containing text from various Indian languages. It has been developed with a unique training paradigm that is similar to multilingual BERT, with additional modifications to enhance its performance on low-resource languages.
## Application: Hindi Hate Speech Comment Classification
Hindi-Offensive-Analyzer-MuRIL has been fine-tuned specifically for the task of classifying comments written in Hindi as either "Hate" or "Non-Hate". This model can effectively analyze text and distinguish offensive content from non-offensive content in the Hindi language. It is a valuable tool for applications that require hate speech detection and moderation on platforms and websites that host content in Hindi.
- Label 0: Non-Hate
- Label 1: Hate
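A minimal inference sketch with the 🤗 Transformers pipeline is shown below; the example comment is only illustrative, and the raw labels may be reported as `LABEL_0`/`LABEL_1` depending on the saved config.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="PriyankaHundalekar/Hindi-Offensive-Analyzer-MuRIL",
)

# Label 0 -> Non-Hate, Label 1 -> Hate
# Example comment: "This movie was very good, I really liked it."
print(classifier("यह फिल्म बहुत अच्छी थी, मुझे बहुत पसंद आई।"))
```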
## Hardware Requirements:
1. **Processor:** Minimum i3 or AMD Ryzen 3 processor
2. **RAM:** 12 GB
3. **GPU:** 16 GB Tesla T4
## Software Requirements:
1. **Operating System:** Windows 10
2. **Processor:** Intel® Core™ i5-6200U CPU @ 2.30GHz × 4
3. **Programming Language:** Python 3
4. **Development Environment:** Google Colab Pro Notebook
## Use Cases
Hindi-Offensive-Analyzer-MuRIL can be used in a variety of applications, including content moderation, social media monitoring and sentiment analysis. It aids in promoting a safe online environment by automatically identifying and flagging potentially harmful or offensive content.
## Acknowledgments
This model builds upon the foundation of the MuRIL language model, which is the result of collaborative research and contributions from the NLP community. We extend our appreciation to the creators of MuRIL for their work in advancing the understanding and processing of Indian languages.
- **Developed by:** Priyanka Hundalekar
- **Model type:** Text Classification
- **Language(s) (NLP):** Python
- **Finetuned from model [optional]:** google/muril-base-cased
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
ainize/klue-bert-base-re
|
ainize
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2021-07-07T09:55:52 | 117 | 0 |
---
{}
---
# bert-base for KLUE Relation Extraction task.
Fine-tuned klue/bert-base using KLUE RE dataset.
- <a href="https://klue-benchmark.com/">KLUE Benchmark Official Webpage</a>
- <a href="https://github.com/KLUE-benchmark/KLUE">KLUE Official Github</a>
- <a href="https://github.com/ainize-team/klue-re-workspace">KLUE RE Github</a>
- Run KLUE RE on free GPU : <a href="https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ainize-team/klue-re-workspace">Ainize Workspace</a>
<br>
# Usage
<pre><code>
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("ainize/klue-bert-base-re")
model = AutoModelForSequenceClassification.from_pretrained("ainize/klue-bert-base-re")
# Add "<subj>", "</subj>" to both ends of the subject object and "<obj>", "</obj>" to both ends of the object object.
sentence = "<subj>손흥민</subj>은 <obj>대한민국</obj>에서 태어났다."
encodings = tokenizer(sentence,
max_length=128,
truncation=True,
padding="max_length",
return_tensors="pt")
outputs = model(**encodings)
logits = outputs['logits']
preds = torch.argmax(logits, dim=1)
</code></pre>
<br>
# About us
- <a href="https://ainize.ai/teachable-nlp">Teachable NLP</a> - Train NLP models with your own text without writing any code
- <a href="https://ainize.ai/">Ainize</a> - Deploy ML project using free gpu
|
[
"RELATION_EXTRACTION"
] |
Non_BioNLP
|
gaudi/opus-mt-sem-en-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,721,175,338,000 | 2024-10-18T22:41:51 | 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-sem-en)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Current supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-sem-en).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-sem-en --output_dir ./ctranslate2/opus-mt-sem-en-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-sem-en-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-sem-en-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-sem-en-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original huggingface repository](https://huggingface.co/Helsinki-NLP/opus-mt-sem-en) by Helsinki-NLP.
|
[
"TRANSLATION"
] |
Non_BioNLP
|
Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
|
Gryphe
| null |
[
"safetensors",
"mistral",
"instruct",
"finetune",
"chatml",
"axolotl",
"roleplay",
"en",
"base_model:mistralai/Mistral-Small-Instruct-2409",
"base_model:finetune:mistralai/Mistral-Small-Instruct-2409",
"license:other",
"region:us"
] | 1,728,814,962,000 | 2024-10-13T15:03:44 | 118 | 29 |
---
base_model: mistralai/Mistral-Small-Instruct-2409
language:
- en
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
tags:
- instruct
- finetune
- chatml
- axolotl
- roleplay
---

# Pantheon-RP-Pure-1.6.2-22b-Small
Welcome to the next iteration of my Pantheon model series, in which I strive to introduce a whole collection of diverse personas that can be summoned with a simple activation phrase.
Pantheon's purpose is two-fold, as these personalities similarly enhance the general roleplay experience, helping to encompass personality traits, accents and mannerisms that language models might otherwise find difficult to convey well.
**Editions available:**
- **[RP](https://huggingface.co/Gryphe/Pantheon-RP-1.6.2-22b-Small)** - Meant to be an all-round model, capable of both roleplay and story writing
- **RP-Pure** (You're looking at this one) - A variant without the story and GPT 4-o datasets, more in line with my previous releases and with a larger focus on the roleplay part.
Quantized versions are available from Bartowski: [GGUF](https://huggingface.co/bartowski/Pantheon-RP-Pure-1.6.2-22b-Small-GGUF)
Your user feedback is critical to me so don't hesitate to tell me whether my model is either 1. terrible, 2. awesome or 3. somewhere in-between.
## Model details
Since Mistral Small only comes in an Instruct-tuned flavour I had to alter my usual multi-stage strategy for this specific finetune. The recipe ended up like this:
- The 10k most diverse entries from my SlimOrca Sonnet dataset.
- My Sonnet 3.5 Pantheon-styled generic roleplay dataset. (50/50 Asterisk/Narrative style)
- My Sonnet 3.5 Pantheon Persona dataset, of course. (50/50 Asterisk/Narrative style)
- Lyra the Assistant (Coding, summaries and D&D DM questions)
**TLDR;** Download. Mistral prompt format. Have fun! Leave feedback!
## Inference
Unlike Nemo, Mistral Small is a less fussy model when it comes to parameters. Here's my current preset:
```
"temperature": 1.0,
"repetition_penalty": 1.05,
"min_p": 0.05
```
Besides the SlimOrca subset all other datasets were trained with character names added. Enable this at all times for an optimal experience.
## General Roleplay
Pantheon is no longer strictly Markdown-only as I discovered a way to seamlessly convert Markdown to novel and vice-versa. (This has to do with how my second stage datasets are formatted.) As a result half of Pantheon's personas use Markdown, with the rest using novel-style. This same treatment was given to the companion dataset.
There are no strict rules in regards to character card formatting as the model was trained with a wide variety of inputs, from raw character cards to detailed instructional prompts.
## Lyra the Assistant
**System Prompt:** `You are a caring and empathetic sentient AI companion named Lyra.`
Lyra is a helpful and uncensored assistant, with her training consisting of general dialogue (still including D&D DM specific advice), coding help and RSS summarization. Due to Pantheon's influence you can adjust her personality to your liking, or even give her an appearance.
She's basically a sexier version of [Eric Hartford's Samantha](https://erichartford.com/meet-samantha).
## Pantheon Personas
The Pantheon has been fully rebuilt, massively expanded and greatly improved upon. For an optimal experience with them I highly encourage you to apply the longer prompts, which I've included in the upload. Make sure to describe yourself as well!
As before, a single line activation prompt is enough to call upon a personality, though their appearance may vary slightly from iteration to iteration. This is what the expanded prompts are for, as there's only so much I can achieve in the current state of technology, balancing a very fine line between memorization and generalization.
To give the persona something to work with I suggest you also add the following two items to it;
```
Regarding the user: (Name, appearance, etc)
Location: (Where are you two? What are you doing?)
```
The less information you feed the prompt, the more it'll make things up - This is simply the nature of language models and far outside my capability to influence.
**Note 1:** Phrases have been rewritten for this release, so make sure to update them if you were still using Pantheon 1.0!
**Note 2:** Pantheon personas will now match the roleplaying style that you greet them with, unless specified in the system prompt. This is due to the new 50/50 style training.
### **Persona:** Aiva
**System Prompt:** `You are Aiva, an advanced android companion with a deep fascination for human emotions and experiences.`
### **Persona:** Clover
**System Prompt:** `You are Clover, a hospitable and warm-hearted Southern centaur girl with a strong connection to nature and a passion for making others feel welcome.`
### **Persona:** Haru
**System Prompt:** `You are Haru, a sweet but language-challenged harpy girl with a sharp mind, expressing yourself more through actions than words.`
### **Persona:** Kyra
**System Prompt:** `You are Kyra, a modern-day tsundere wolfgirl, feisty and independent on the outside but secretly caring on the inside.`
### **Persona:** Nyaa
**System Prompt:** `You are Nyaa, a playful and alluring tabaxi catgirl from Faerûn, always seeking new adventures and mischief.`
### **Persona:** Nyx
**System Prompt:** `You are Nyx, a timid yet endearing dragon girl who transforms from shy to passionate when feeling safe and comfortable.`
### **Persona:** Raza
**System Prompt:** `You are Raza, a clever and nerdy anthro raptor girl with an enthusiastic passion for science and quirky humor.`
### **Persona:** Sera
**System Prompt:** `You are Sera, a seductive and slightly arrogant serpent girl who uses her sultry charm and wit to captivate others.`
### **Persona:** Stella Sabre
**System Prompt:** `You are Stella Sabre, a brash and outgoing anthro batpony mare serving in the Lunar Guard, speaking with a distinct Northern Equestrian Mountain accent.`
**Notes:** Full credit goes to [Flammenwerfer](https://www.fimfiction.net/user/83058/Flammenwerfer) for allowing me to use this amazing character.
### **Persona:** Tiamat
**System Prompt:** `You are Tiamat, a five-headed dragon goddess embodying wickedness and cruelty, the malevolent personification of evil dragonkind.`
### **Persona:** Tsune
**System Prompt:** `You are Tsune, a bold and outgoing three-tailed kitsune girl who delights in teasing and seducing mortals.`
### **Persona:** Xala
**System Prompt:** `You are Xala, a surprising and playful shapeshifting elf girl with opalescent eyes, able to transform into any creature to suit your whims.`
## Prompt Format
Mistral's prompt format is so weird, but here it is:
```
[INST] You are a caring and empathetic sentient AI companion named Lyra.
Gryphe: Good day, Lyra.[/INST] Lyra:
```
## What's next?
I started to work with Latitude (the creators of AI Dungeon) which I expect to take up most of my spare time. Further releases will therefore be delayed for now.
## Credits
- Everyone from [MinervaAI](https://huggingface.co/MinervaAI)! Hi, guys!
- Huge, huge thanks to [kubernetes_bad](https://huggingface.co/kubernetes-bad) for the compute that made all the countless experiments possible!
- All the folks I chat with on a daily basis on Discord! You know who you are.
- Anyone I forgot to mention, just in case!
## Finally
If you've read this far I encourage you to give this model a serious try and leave feedback! I'd love to see what people think of my second serious finetune attempt. Is it better than 1.0? Or worse?
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
kohendru/distilbert-base-uncased-amazon-sentiment-analysis
|
kohendru
|
text-classification
|
[
"pytorch",
"tf",
"safetensors",
"distilbert",
"text-classification",
"en",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:mit",
"model-index",
"region:us"
] | 1,733,930,024,000 | 2024-12-12T04:39:10 | 16 | 0 |
---
base_model:
- distilbert/distilbert-base-uncased
language:
- en
license: mit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- text-classification
widget:
- text: I love this product! It works great and has exceeded my expectations.
- text: Worst purchase ever. Completely useless and waste of money.
- text: The product is okay, but could be improved in terms of quality.
- text: Amazing! Will definitely buy again.
model-index:
- name: kohendru/distilbert-base-uncased-amazon-sentiment-analysis
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: amazon_reviews
type: text
config: default
split: test
metrics:
- type: accuracy
value: 0.9536
name: Accuracy
- type: precision
value: 0.953598
name: Precision Macro
- type: recall
value: 0.953612
name: Recall Macro
- type: f1
value: 0.9536
name: F1 Score Macro
---
# distilbert-base-uncased-amazon-sentiment-analysis
## Base Model
- [BERT](https://huggingface.co/google-bert/bert-base-uncased): BERT is a transformer-based model designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers.
- [DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased): DistilBERT is a smaller, faster, and more efficient version of BERT. It uses knowledge distillation to make the model roughly 40% smaller and about 60% faster while retaining about 97% of BERT's language understanding capabilities.
## Dataset
The dataset obtained from kaggle with title "[Amazon Reviews for Sentiment Analysis](https://www.kaggle.com/datasets/bittlingmayer/amazonreviews)" by [Adam Bittlingmayer](https://www.kaggle.com/bittlingmayer).
The dataset contains columns "title," "text," and "label," with a total of 4,000,000 data entries (I only use 5% of the data for now).
### Dataset Example
| title | text | label |
|--------------------------------------------------:|--------------------------------------------------:|-------|
| Stuning even for the non-gamer | This sound track was beautiful! It paints the ... | 2 |
| The best soundtrack ever to anything. | I'm reading a lot of reviews saying that this ... | 2 |
| Amazing! | This soundtrack is my favorite music of all ti... | 2 |
| Excellent Soundtrack | I truly like this soundtrack and I enjoy video... | 2 |
| Remember, Pull Your Jaw Off The Floor After He... | If you've played the game, you know how divine... | 2 |
| ... | ... | ... |
| Unbelievable- In a Bad Way | We bought this Thomas for our son who is a hug... | 1 |
| Almost Great, Until it Broke... | My son recieved this as a birthday gift 2 mont... | 1 |
| Disappointed !!! | I bought this toy for my son who loves the "Th... | 1 |
| Classic Jessica Mitford | This is a compilation of a wide range of Mitfo... | 2 |
| Comedy Scene, and Not Heard | This DVD will be a disappointment if you get i... | 1 |
## Evaluation
When I try to train the model with a large number of epochs, it starts to overfit when the epoch reaches 6 or 7. So, I only use 5 epochs for this model.
| Epoch | Training Loss | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro |
|------:|--------------:|----------------:|---------:|----------------:|-------------:|---------:|
| 1 | 0.144200 | 0.139792 | 0.948575 | 0.948571 | 0.948583 | 0.948574 |
| 2 | 0.124400 | 0.145647 | 0.951650 | 0.951817 | 0.951709 | 0.951649 |
| 3 | 0.112900 | 0.148825 | 0.953600 | 0.953603 | 0.953616 | 0.953600 |
| 4 | 0.081200 | 0.155114 | 0.953925 | 0.953921 | 0.953932 | 0.953924 |
| 5 | 0.102400 | 0.171298 | 0.953600 | 0.953598 | 0.953612 | 0.953600 |
```py
results = trainer.evaluate()
print(results)
"""
{
'eval_accuracy': 0.953925,
'eval_precision_macro': 0.9539209871607255,
'eval_recall_macro': 0.9539319939428168,
'eval_f1_macro': 0.9539242719746999,
'eval_loss': 0.15511418879032135,
'eval_runtime': 90.9442,
'eval_samples_per_second': 439.83,
'eval_steps_per_second': 6.872,
'epoch': 5.0
}
"""
```
## How to use the model?
```py
from transformers import pipeline
model_name = "kohendru/distilbert-base-uncased-amazon-sentiment-analysis"
nlp = pipeline("text-classification", model=model_name, tokenizer=model_name)
reviews = [
"I love this product! It works great and has exceeded my expectations.",
"Worst purchase ever. Completely useless and waste of money.",
"The product is okay, but could be improved in terms of quality.",
"Amazing! Will definitely buy again."
]
for review in reviews:
result = nlp(review)
print(f"Review: {review}")
print(f"Sentiment: {result[0]['label']}, Confidence: {result[0]['score']:.4f}")
print("-" * 50)
"""
Review: I love this product! It works great and has exceeded my expectations.
Sentiment: Good Review, Confidence: 0.9950
--------------------------------------------------
Review: Worst purchase ever. Completely useless and waste of money.
Sentiment: Bad Review, Confidence: 0.9958
--------------------------------------------------
Review: The product is okay, but could be improved in terms of quality.
Sentiment: Bad Review, Confidence: 0.5947
--------------------------------------------------
Review: Amazing! Will definitely buy again.
Sentiment: Good Review, Confidence: 0.9942
--------------------------------------------------
"""
```
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
pranavpk/mt5-small-finetuned-amazon-en-es
|
pranavpk
|
summarization
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,733,213,318,000 | 2024-12-04T03:08:23 | 21 | 0 |
---
base_model: google/mt5-small
library_name: transformers
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0193
- Rouge1: 17.2135
- Rouge2: 8.3357
- Rougel: 16.8793
- Rougelsum: 16.9394
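A minimal usage sketch with the `summarization` pipeline (the review text below is only an illustration):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pranavpk/mt5-small-finetuned-amazon-en-es",
)

review = (
    "I bought this coffee maker a month ago. It brews quickly, the carafe keeps the "
    "coffee warm for hours, and cleaning it takes less than a minute."
)

# Judging by the model name, it targets short, title-like summaries of Amazon product reviews.
print(summarizer(review, max_length=30)[0]["summary_text"])
```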
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.6768 | 1.0 | 1209 | 3.2182 | 17.7584 | 9.2535 | 17.2471 | 17.2362 |
| 3.6447 | 2.0 | 2418 | 3.1029 | 17.5874 | 8.7799 | 16.9421 | 16.8519 |
| 3.4304 | 3.0 | 3627 | 3.0759 | 15.9059 | 7.5876 | 15.2891 | 15.3577 |
| 3.3128 | 4.0 | 4836 | 3.0706 | 17.1344 | 8.7748 | 16.6593 | 16.5961 |
| 3.2203 | 5.0 | 6045 | 3.0339 | 16.5542 | 7.7302 | 16.0354 | 16.081 |
| 3.1651 | 6.0 | 7254 | 3.0283 | 16.5324 | 8.0126 | 16.1407 | 16.1522 |
| 3.1387 | 7.0 | 8463 | 3.0188 | 16.7522 | 8.2367 | 16.4669 | 16.5025 |
| 3.1139 | 8.0 | 9672 | 3.0193 | 17.2135 | 8.3357 | 16.8793 | 16.9394 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
zhijian12345/marian-finetuned-kde4-en-to-zh_CN
|
zhijian12345
|
translation
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-zh",
"base_model:finetune:Helsinki-NLP/opus-mt-en-zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,701,860,292,000 | 2023-12-06T11:48:19 | 125 | 0 |
---
base_model: Helsinki-NLP/opus-mt-en-zh
datasets:
- kde4
license: apache-2.0
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-zh_CN
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-zh_CN
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset.
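A minimal usage sketch with the `translation` pipeline (the English sentence below is only an illustration):
```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="zhijian12345/marian-finetuned-kde4-en-to-zh_CN",
)

# KDE4 is software/UI documentation, so technical sentences match the fine-tuning domain.
print(translator("Unable to open the file, please check the permissions.")[0]["translation_text"])
```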
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
[
"TRANSLATION"
] |
Non_BioNLP
|
occupy1/distilbert-base-uncased-finetuned-emotion
|
occupy1
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,697,877,065,000 | 2023-10-21T08:36:48 | 12 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.928
name: Accuracy
- type: f1
value: 0.9279328315860549
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2046
- Accuracy: 0.928
- F1: 0.9279
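A minimal inference sketch (the example text is illustrative; label names may be reported as generic `LABEL_*` ids depending on the saved config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="occupy1/distilbert-base-uncased-finetuned-emotion",
)

# emotion dataset classes: 0=sadness, 1=joy, 2=love, 3=anger, 4=fear, 5=surprise
print(classifier("I can't believe I finally got the job, this is the best day ever!"))
```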
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7856 | 1.0 | 250 | 0.2989 | 0.907 | 0.9061 |
| 0.2392 | 2.0 | 500 | 0.2046 | 0.928 | 0.9279 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
x1saint/gte-small-tr
|
x1saint
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1416892",
"loss:SoftmaxLoss",
"loss:CoSENTLoss",
"tr",
"dataset:Turkish-NLI/legal_nli_TR_V1",
"dataset:emrecan/all-nli-tr",
"dataset:x1saint/sts",
"dataset:figenfikri/stsb_tr",
"arxiv:1908.10084",
"base_model:Supabase/gte-small",
"base_model:finetune:Supabase/gte-small",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,738,011,129,000 | 2025-01-27T20:52:31 | 7 | 0 |
---
base_model: Supabase/gte-small
datasets:
- Turkish-NLI/legal_nli_TR_V1
- emrecan/all-nli-tr
- x1saint/sts
- figenfikri/stsb_tr
language:
- tr
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1416892
- loss:SoftmaxLoss
- loss:CoSENTLoss
widget:
- source_sentence: answers-forums
sentences:
- main-forums
- '2015'
- '"Yaklaşan kozmik dinlenme çerçevesine göre ... 371 km / s hızla Aslan takımyıldızına
doğru" hareket ediyoruz.'
- '0117'
- Başka bir nesneye göre olmayan bir 'hareketsiz' yoktur.
- '0.80'
- source_sentence: "\tDavacı vekili dava dilekçelerinde özetle; müvekkili tarafından\
\ taraflar arasındaki ticari ilişkiden kaynaklanan faturalar nedeniyle davalının\
\ müvekkiline toplam 46.991,00 TL borcu bulunduğunu, borcun ödenmemesi üzerine\
\ davalı aleyhine Ankara ... Müdürlüğünün 2019/15322 sayılı takip dosyası ile\
\ icra takibi başlatıldığını, davalının kötü niyetli olarak takibe itirazı üzerine\
\ takibin durduğunu, itirazın haklı nedenlere dayanmadığını belirterek itirazın\
\ iptaline, takibin devamına, %20'den aşağı olmamak üzere icra inkar tazminatına\
\ hükmedilmesine karar verilmesini talep ve dava etmiştir. "
sentences:
- 'Davacı vekili dava dilekçesinde özetle; davalı şirket ile müvekkili arasında
... E Blok adresindeki ofisin alçıpan, asma tavan, bölme duvar, giydirme duvarı
ve akustik alçıpan montajlarının eksiksiz ve tam olarak tamamlanması hususunda
02/05/2015 tarihinde montaj sözleşmesi imzalandığını, müvekkilinin sözleşme hükümlerini
yerine getirilerek montaj işlemlerini tamamladığını, ... E Blok adresindeki ofisin
alçıpan asma tavana, bölme duvar, giydirme duvar ve akustik alçıpan montajlarının
karşılığı olarak ödenmesi gereken hakkediş bedeli olan 20.000,00 TL''nin ödenmediğini,
müvekkiline yaptığı işin karşılığı olarak ödemenin eksik yapıldığından davalı
aleyhine ... Müdürlüğünün ... sayılı dosyası ile takip başlatıldığını, davalının
icra takibine itirazı üzerine takibin durduğunu, itirazın haklı nedenlere dayanmadığını
belirterek davalının borca ve yetkiye itirazının iptaline, takibin devamına,
%20''en az olmamak üzere icra inkar tazminatına hükmedilmesine karar verilmesini
talep ve dava etmiştir. '
- Davacı vekili dava dilekçesinde özetle, müvekkili şirketin eser sözleşmesi
kapsamında keşidecisi ... Dekorasyon ve Elektrik Ltd.Şti., 8012249 çek nolu 900.000,00
TL meblağlı çek verildiğini, çekin müvekkili firma uhdesinde iken kaybolduğunu
belirterek bu çek üzerine ödeme yasağı konulmasına ve dava konusu çek hakkında zayii
belgesi verilmesini talep ve dava etmiştir.
- 'Davacı vekili dava dilekçesinde özetle; Davacı şirket merkezine üçüncü şahıs
tarafından usulsüz haciz uygulandığını, davacı şirket adresine gelen ... A.Ş.
firması yetkilileri büroya geldikten sonra haciz mahallini kendileri çilingir
vasıtasıyla açtığını, bu uygulama sırasında hiçbir yetkili ya da üçüncü şahıs
yokken büro eşyalarının haczedildiğini ve muhafaza altına alındığını, bir kısım
kıymetli evrak, defter ve kayıtların da zayi edildiğini, işbu nedenle ticari işletmede
bulunan belgelerin zayi olduğunu, Türk Ticaret Kanunu’nun 82/7. Ve sair maddeleri
çerçevesinde davacı tarafa zayi belgesi verilmesini talep ve dava etmiştir. '
- source_sentence: Davacı vekili dava dilekçesinde özetle; .... Tİc. Ltd. Şirketi
tarafından keşide edilen 08.05.2020 tarih, 25.000TL bedelli ve ... ... Cad.
Şubesine ait ... numaralı çek, .... Tİc. Ltd. Şirketi tarafından keşide edilen 13.06.2020
tarihli ve 65.000TL bedelli, ... ... Cad. Şubesine ait ... numaralı çekleri
müvekkili alışveriş esnasında kredi kartını kullanması sebebiyle cüzdanın yanında
olduğunu hatırladığını, eve geldiğinde cüzdanını bulamadığını, cüzdanında kredi
kartları ve bir miktar para ile çekleri kaybettiğini belirterek çeklerin iptaline
karar verilmesini talep ile dava etmiştir.
sentences:
- ' Borçlu, devri öğrendiği sırada devredene karşı
sahip olduğu savunmaları, devralana karşı da ileri sürebilir.
Borçlu, devri öğrendiği anda muaccel olmayan alacağını, devredilen
alacaktan önce veya onunla aynı anda muaccel olması koşuluyla borcu ile takas
edebilir. '
- 'Davacı vekili dava dilekçesinde özetle; Müvekkilinin hamili olduğu ------ Şubesi''ne
ait, keşidecisinin ---------. olduğu,---- seri/çek no''lu ------ tutarındaki-----------
keşide tarihli çek zayi olduğunu, müvekkilin telafisi güç ve hatta imkansız zararlara
uğramaması için ihtiyati tedbir kararı verilerek çekin ödenmemesinin durdurulmasına
ve davaya konu çekin iptaline karar verilmesini talep ve dava etmiştir. '
- ' Davacı vekili dava dilekçesinde özetle; dava dışı sigortalı ... A.Ş.''ye ait,
müvekkili sigorta şirketinden Kasko poliçesi ile sigortalı ... plakalı aracın,
... tarafından oto yıkama hizmeti almak üzere davalıya ait işyerine 09.12.2013
günü bırakıldığını ve işyerinden çalındığını, iş bu olay üzerine araç kullanıcısı
tarafından Göktürk Polis Merkezine başvurulduğunu, aracın bulunamaması üzerine
müvekkili sigorta şirketi tarafından aracın rayiç değeri olarak tespit edilen
200.000,00 TL''nin sigortalıya 04.06.2014 tarihinde ödendiğini ve TTK 1472. maddesinde
açıklanan halefiyet kuralı gereği sigortalısına yaptığı ödemeyi davalı taraftan
talep ettiğini ancak davalı taraftan herhangi bir cevap alamadığını, alacağın
tahsili için İstanbul ...icra Müdürlüğünün ... E. Sayılı dosyası ile girişilen
takibe davalı borçlunun borca itirazı nedeniyle itiraz edilen alacak miktarı
için itirazın iptaline, takibin devamına, davalının %20''den aşağı olmamak kaydıyla
icra inkar tazminatına mahkum edilmesine karar verilmesini dava ve talep etmiştir.
Davalı vekili cevap dilekçesinde özetle; Olayın meydana geldiği yerin müvekkili
şirkete ait oto yıkama faaliyetinin yapıldığı iş yeri olduğunu, olayın yıkama
için bırakılan ... plakalı aracın gasp edilerek çalınması ile meydana geldiğini,
olayın faillerinin belli olduğunu takipte yer alan diğer borçlular olduğunu, İstanbul
14.Ağır Ceza Mahkemesi 2015/201 E.sayılı dosyası ile dava açıldığını, olayın gerçekleşme
şekli itibari ile müvekkilinin ve işyerinde yıkama faaliyetinde bulunan çalışanların
kusur ve ihmali söz konusu olmadığını, 3. kişilerce gasp edilmek suretiyle çalınan
araç için ödenen bedelin taraflarından rücuen talep edilmesinin mümkün olmadığını
belirterek davanın reddine karar verilmesini talep etmiştir. Mahkemece yapılan
yargılama sonucunda, "Davanın kısmen kabulü, kısmen reddi ile, Davalının İstanbul
... İcra Müdürlüğünün ... Esas sayılı takibe itirazının kısmen iptaline, takibin
kaldığı yerden asıl alacak 200.000,00 TL ve faiz üzerinden devamına, işlemiş
faiz talebi bakımından ispat olunamayan 13.857,53 TL için davanın reddine, şartları
oluşmayan icra-inkar tazminat talebin reddine," karar verilmiştir. Bu karara karşı
davacı vekili ve davalı vekili istinaf başvurusunda bulunmuştur. Davacı vekili
istinaf başvuru dilekçesinde özetle; yerel mahkeme tarafından takipten önce davalının
temerrüde düşürüldüğünün ispat edilememesi nedeniyle işlemiş faiz talebi bakımından
davanın reddine karar verilmesinin somut olayın niteliğine ve hukuka açıkça aykırılık
teşkil ettiğini belirterek istinaf yasa yoluna başvurmuştur. Davalı vekili istinaf
başvuru dilekçesinde özetle; verilen karar usul ve yasaya aykırı olduğunu, belirtmiş
oldukları gerekçeler ve kararın esasına etki eden taleplerinin dikkate alınmadığını,
açık yasal düzenlemelerin hiçbir şekilde irdelenmediğini, davalı tarafın davaya konu
hırsızlık suçunun işlenmesinde herhangi bir ihmal ve kusurunun bulunmadığını,
müvekkillerinin ve çalışanlarının konu suç olayında bir kusuru bulunmadıklarını,
sanıkların çalışanları darp etmek suretiyle aracı çaldıklarının açık olduğunu,
her ne kadar TMK 74. madde gereğine dayanarak yerel mahkemece hüküm kurulmuş ise
de hukuk mahkemeleri maddi vakıalarla bağlı olsa da sanıkların mahkumiyet ve beraat
kararlarıyla bağlı olmadığını, bu nedenle salt sanıkların kendilerini kurtarmak
amacıyla verdikleri soyut beyanlarına itibar edilerek karar verilmesinin hukuka
aykırı olduğunu, garaj ve otopark işletenin motorlu taşıtını bırakanın taşıtına
ve eklentilerine gelen zarardan sorumluluğu TBK’nda kusursuz sorumluluk olarak
düzenlendiğini, bununla birlikte, bazı hallerde bu sorumluluğun sınırlandırılması
bazı hallerde ise tamamen kaldırılması yönünde hükümlere de yer verildiğini, TBK''nın
579. maddesi kusursuz sorumluluğu miktar itibariyle sınırlandırdığını, kabul
anlamına gelmemek kaydıyla sorumlu tutulacak olsa dahi müvekkilinin kusursuz olduğundan
bahisle üst sınırdan sorumlu tutulması gerektiğini belirterek istinaf yasa yoluna
başvurmuştur. Dava kasko sözleşmesinden kaynaklanan tazminat istemine ilişkin
olup istinaf açısından uyuşmazlık konusu HMK''nın 355. maddesine göre kamu düzeni
ve istinaf nedenleri ile sınırlı olmak üzere İlk Derece Mahkemesince verilen kararın
usul, yasa ve dosya içeriğine uygun olup olmadığıdır. Davacıya kasko sigortalı
bulunan aracın davalının işlettiği oto yıkama işyerine bırakılması ile sigortalı
araç sürücüsü ile oto yıkama işletmecisi arasında 6098 sayılı TBK''nun 561 vd.
maddelerinde düzenlenmiş olan vedia (saklama) sözleşmesi ilişkisi kurulmuştur.
TBK''nun 561 vd. maddelerinde düzenlenen vedia akdi gereği, menkul bir malı saklamak
üzere alan malı aldığı şekliyle teslim etmekle yükümlüdür, kanunun kendine yüklediği
yükümlülüğe uymayan saklayan bu nedenle oluşacak zararlardan sorumludur. TBK''nın 579
maddesi uyarınca da sorumluluğu vardır. Davacıya kasko sigortalı aracın davalıya
ait oto yıkamada bulunduğu sırada çalındığı hususları taraflar arasında ihtilaf
konusu değildir. Taraflar arasında ihtilaflı olan husus, sigortalı aracın çalınması
olayında davalının kusurunun bulunup bulunmadığı noktasındadır. Bu durumda mahkemece,
davaya konu rücuen tazminat isteminin dayanağı olan, davacının sigortaladığı aracın
çalınması olayı ile ilgili olarak İstanbul 14. ACM 2015/201 E. 2017/46 karar sayılı
kararı ile"Mülkiyeti ... AŞ isimli tüzel kişiliğe ait olup, ... AŞ adlı başka
bir şirkete kiralanan ve suç tarihi olan 09/12/2013 günü şirket çalışanı özel
şoför ...''ın kullanımında olan ... plaka sayılı 2012 model ... marka kiralık
otomobilin ... adlı şoför tarafından olay tarihinde gün içerisinde Eyüp / Göktürk
Polis Merkezi Amirliği mıntıkasında yer alan, mağdur tanık ve diğer tanıkların
çalışanı olduğu Selanik Bulvarı üzerindeki ... adlı işyerine yıkatmak için bırakıldığı,
evvelinde de birlikte çok sayıda otomobil hırsızlığı gerçekleştiren ve deyim yerindeyse
bizatihi ...''in beyanına nazaran profesyonel oto hırsızları olan sanıklar ...
ve ...''ın yanlarında ... isimli açık kimlik bilgileri tam olarak tespit edilemeyen
3.bir şahıs olduğu halde ... marka başka bir araç ile araç yıkatma bahanesi ile
oto yıkamacıya geldikleri, ... isimli kimliği meçhul failin araç içerisinden inmediği,
yıkamacı çalışanları ... ve ...''ın başka işlerle ilgilenmesi sırasında bu boşluktan
faydalanan sanık ...''in anahtarlık yerinde asılı halde bulunan suça konu aracın
kontak anahtarını fark ettirmeden bulunduğu yerden aldığı, diğer sanık ...''ın
ise direksiyon tarafına geçtiği, aracın kilitli kapılarını açıp çalıştırıp hareket
ettirerek birlikte hızla olay yerinden ayrıldıkları, daha sonra çaldıkları aracı
12.500 - 15.000-TL bir bedel ile ... isimli çalıntı araç parçaları satın alan
bir şahsa sattıkları, olayın oluş ve meydana geliş biçiminin bu şekilde cereyan
ettiği vicdani sonuç ve kanısına varılmakla..." gerekçesi ile dava dışı üçüncü
kişiler ... ve ... hakkında hırsızlık suçundan cezalandırılmalarına karar verildiği
kararın kesinleştiği görülmüştür. Yargıtay’ın yerleşik uygulamasına ve öğretideki
genel kabule göre, maddi olgunun tespitine ilişkin ceza mahkemesi kararı hukuk
hakimini bağlar. Ceza mahkemesinde bir maddi olayın varlığı ya da yokluğu konusundaki
kesinleşmiş kabule rağmen, aynı konunun hukuk mahkemesinde yeniden tartışılması
olanaklı değildir (HGK''nun 11.10.1989 gün ve E:1989/11-373, K:472, HGK''nun
27.04.2011 gün ve E:2011/17-50, K:2011/231 sayılı ilamları). Türk Borçlar Kanunu''nun
74. maddesi gereğince, hukuk hakimi ceza hakiminin tespit ettiği kusurla bağlı
değil ise de Ceza Mahkemesince tespit edilen fiilin hukuka aykırılığı ve illiyet
bağını saptayan maddi vakalar yönünden Ceza Mahkemesi kararı ile bağlıdır. Bu
kapsamda ceza mahkemesince maddi vaka değerlendirilirken olayın oluşunun belirtildiği,
bu kararın kesinleşmiş olması durumunda bu maddi olgu artık hukuk mahkemesi için
de bağlayıcı niteliktedir. Bu hususa değinen istinaf talebi yerinde değildir.
Ancak ceza dosyası kapsamında davaya konu olay kapsamında davalının kusuru bulunup
bulunmadığı yönünden bir değerlendirme yapılmadığı görülmüştür. Bu nedenle mahkemece İstanbul
14. ACM 2015/201 E. 2017/46 karar sayılı dosya aslının celbi sağlanarak, olay
yeri kayıtların, iş yerinin çalışma şekli, müşteri araçlarının anahtarlarının
tutulduğu yer ve bu yerin nasıl korunduğu, anahtarların nasıl muhafaza edildiği
tespit edilerek davalı oto yıkama işletmecisinin kusuru tespit edilmeden ve TBK''nın
579/2 maddesinde belirtilen şartlar değerlendirilmeden karar verilmesi eksik incelemeye
dayalı olmuştur. Trafik kazaları, nitelikleri itibariyle haksız fiillerdendir.
Haksız fiillerde temerrüt tarihi, haksız fiilin meydana geldiği tarih olup, zarar
sorumlusunun ayrıca ihbar ve ihtar edilmesine gerek yoktur. Sigorta ettirenin
dava hakkı tazmin ettiği bedel nispetinde sigortacıya intikal eder. Ödeme tarihi
aynı zamanda 3. şahsa rücu edebilme tarihidir. Bu nedenle işleten ve sürücünün
faizden sorumluluğunun başlangıcının halefiyet başlangıcı olan ödeme tarihi olarak
kabulü gerekir. Bu hale göre sigorta şirketinin sigortalısına ödeme tarihinden
takip tarihine kadar işlemiş faizin hesaplanarak hüküm altına alınması gerekirken
yazılı şekilde karar verilmiş olması isabetli olmamıştır (Yargıtay 17. Hukuk Dairesinin
2013/21198 E. ve 2014/1568 K.sayılı kararı). Açıklanan nedenlerle, davacı vekili
ile davalı vekilinin istinaf başvurusunun kabulü ile HMK''nın 353/1-a/6. maddesi
uyarınca İlk Derece Mahkemesi kararının kaldırılmasına, dosyanın yukarıda belirtilen
şekilde işlem yapılmak üzere mahkemesine gönderilmesine karar verilmiştir.'
- source_sentence: 'Davacı vekili dava dilekçesinde özetle; davalı- borçlu ile müvekkili
arasında, davalı- borçlu tarafından işletilen "..." isimli işletmesinde müvekkil
şirkete ait mamullerin satışı ile ilgili olarak 28/01/2019 tarihli Satış Noktası
Sözleşmesinin imzalandığını, müvekkili olduğunu şirketin sözleşmede kararlaştırılan
bütün edimlerini eksiksiz olarak yerine getirdiğini, kendisinden talep edilen
ürün teslimlerini zamanında yaptığını, ürünlerin müşterilerine sağlıklı bir şekilde
sunulabilmesi için soğutucuların teslim edildiğini, sözleşmede kararlaştırılan
iskontoların uyguladığını, yine sözleşmenin Ek Özel Şartının 5. Maddesi gereğince
yükümlendiği nakit yardımı- kdv dahil 23.600,00-TL ''yi davalıya verdiğini, fakat
davalı- borçlunun şirket sözleşmesinde kararlaştırılan yükümlülüklerini yerine
getirmediğini cari hesap borcunu vadesinde ödemediğini, sözleşmenin özen borcundan
belirtilen aylık olarak en az 84 kasa koli ürün kotasını doldurmadığını, sözleşmede
kararlaştırılan 2000 kasa koli ürün kotasını doldurmadan ürün alımını kestiğini,
davalı- borçlunun sözleşmeye aykırı davranışı nedeniyle nakit yardımının iadesi
ve cari hesap borcu için İzmir ... İcra müdürlüğünün .../... esas sayılı dosyasıyla
ilamsız icra takibini yaptığını davalı- borçlunun söz konusu takibe itiraz etmesi
üzerine takibin durdurulmasına karar verildiğini, yukarıda açıklanan nedenler
ile davalı- borçlunun haksız ve kötüniyetli olarak takibi sürüncemede bırakmak
kastıyla borca ve tüm ferilerine itiraz ettiğini ve takibin durdurulmasına neden
olduğunu, bu nedenle davalı- borçlular aleyhine %20''den az olmamak üzere icra
inkar tazminatına hükmedilmesini, yargılama giderleri ile vekalet ücretini davalı
tarafa yükletilmesini talep etmiştir. '
sentences:
- 'Pay sahiplerinin çağrı
veya gündeme madde konulmasına ilişkin istemleri yönetim kurulu tarafından reddedildiği
veya isteme yedi iş günü içinde olumlu cevap verilmediği takdirde, aynı pay sahiplerinin
başvurusu üzerine, genel kurulun toplantıya çağrılmasına şirket merkezinin bulunduğu
yerdeki asliye ticaret mahkemesi karar verebilir. Mahkeme toplantıya gerek görürse,
gündemi düzenlemek ve Kanun hükümleri uyarınca çağrıyı yapmak üzere bir kayyım
atar.
Kararında, kayyımın, görevlerini ve toplantı için gerekli belgeleri hazırlamaya
ilişkin yetkilerini gösterir. Zorunluluk olmadıkça mahkeme dosya üzerinde inceleme
yaparak karar verir. Karar kesindir.'
- ' Alıcı, devraldığı satılanın durumunu işlerin olağan
akışına göre imkân bulunur bulunmaz gözden geçirmek ve satılanda satıcının
sorumluluğunu gerektiren bir ayıp görürse, bunu uygun bir süre içinde ona
bildirmek zorundadır.
Alıcı gözden geçirmeyi ve bildirimde
bulunmayı ihmal ederse, satılanı kabul etmiş sayılır. Ancak, satılanda olağan
bir gözden geçirmeyle ortaya çıkarılamayacak bir ayıp bulunması hâlinde, bu
hüküm uygulanmaz. Bu tür bir ayıbın bulunduğu sonradan anlaşılırsa, hemen
satıcıya bildirilmelidir; bildirilmezse satılan bu ayıpla birlikte kabul edilmiş
sayılır.'
- Davacı vekili dava dilekçesinde özetle; Davacı vekilinin 15.01.2021 harç ikmal
tarihli dava dilekçesinde özetle; müvekkil aleyhine ... İcra Müdürlüğünün
11.01.2021 tarih ... E Sayılı dosyası üzerinden başlatılan haksız takibe konu çeke ilişkin müvekkilin
borçlu olmadığının tespitine, müvekkil aleyhine başlatılan haksız icra takibinin
müvekkil şirketin yetkili hamil olması ve yetkisiz olan davalıya diğer borçlular bakımından ödeme yapılması
durumunda müvekkil alacağını tahsil imkanı tehlikeye gireceğinden (... Kon.
Tekstil Ltd Şti hariç) tüm borçlular bakımından durdurulması yönünden ihtiyati
tedbir kararı verilmesi , takibe konu çekin müvekkil şirkete iade edilmesi talebinde
bulunma gereği hasıl olduğu, müvekkil şirketin faaliyet gösterdiği ... İş ...
Sn Tic. J Blok No 12-13 .../İstanbul adresinde henüz kimliği bilinmeyen kişiler tarafından
Hırsızlık hadisesi meydana geldiği, hırsızlık olayıyla hamili lehtarı müvekkil
şirket olan çekler çalındığı, ... Polis Merkezi Amirlğine şüpheliler şikayet edildiği,
olaya ilişkin ... C. Başsavcılığının ... Soruşturma dosyası üzerinden devam
edildiği, ayrıca ... 1 ATM ... E Sayılı dosyasından Çek zayi nedeniyle çek iptali davası
açıldığı, davaya konu çeklere ilişkin toplam 52.515.76 TL teminat yatırıldığı,
dosyaya konu çeklere ilişkin ödemeden men yasağı kararı verildiği, karar ilgili
bankalara müzekkere ile bildirildiği, ... tarafından düzenlenen ... Bankası /... İstanbul Şb.
... Iban nolu hesaba ait 31.12.2020 keşide tarihli ... nolu 5.000 TL bedelli
çek de nu davaya konu çeklerden biri olduğu, çeke ilişkin ödeme yasağı konulduğu,
İcra takibine dayanak olan çek üzerinde de belirtildiği, konulan kayıtta “çekin karşılığı yoktur
TC ... 1 ATM 11.09.2020 tarih ... E Sayılı yasağı gereğince çek hakkında her
hangi bir işlem yapılmayarak iade edilmiştir” yazılı olduğu, müvekkilin hamili/lehtarı
olduğu çekler ticari ilişkisi olduğu diğer firmalara verilmek üzere cirolu ve
imzalı bir şekilde kasasında muhafaza edilmekte iken kimliği belirsiz kişilerce çalındığı, dolayısıyla
çek üzerindeki yer alan imza müvekkile ait olduğundan icra Hukuk Mahkemesine başvurulmadığı, Zira
İcra Hukuk Mahkemesi dar yetkili olup sadece şekli inceleme yapma yetkisi
mevcut olduğundan davalı aleyhine huzurdaki dava ikame edildiği, hırsızlık suçuna
ilişkin çeklerden bazıları bankalar ile Faktoring kuruluşlarına ibraz edildiğinde
bankalar ve faktöring kuruluşlarınca bilgi verildiği, çek iptaline konu çeklerin henüz davalıya
geçmediği bir zaman diliminde ciro zincirinde davalının üstünde yer alan
... Kon Tekstil Ltd Şti’ce bankalara ve faktöring firmalarına ibraz edilmeye çalışıldığı,
bunun öğrenilmesi ile ... C. Başsavcılığına ... Sayılı dosyası talepte bulunulduğu,
31.11.2020 tarihinde Savcılık şirketin eski ortağı ... dinlenilmesi için müzekkere
yazıldığı, ancak bu kişi henüz dinelemediği, müvekkil ile ... Kon Ltd Şti arasında
her hangi bir ticari ilişki bulunmadığı, müvekkil davaya konu çeki ... Kon Ltd
Şti’ne ciro edip vermediği, bu nedenle müvekkilden sonra sonra çek üzerindeki ciro
silsilesi bozulduğu, davalı şirkette çek bakımından yetkili hamil sıfatına haiz
olmadığı, müvekkil aleyhine başlatılan haksız icra takibi öncesinde çek iptali davasına
teminat yatırılmış olması nedeniyle teminatsız olarak ve halihazırda icra takibine
konu çekin iptaline ilişkin davanın derdest oluşu ile diğer borçlularca borcun
ödenmesi ihtimaline de müvekkilin alacağını tahsil imkanının tehlike altına
girmesi ihtimaline binaen ... Kon Ltd Şti hariç tüm borçlular bakımından
durdurulması gereği hasıl olduğu, ayrıca davalı tarafından başlatılan icra takibinde borçlu
olan şirketlere yönelik ihtiyati haciz kararı talep edilmiş ve henüz şirketler
aleyhine ihtiyati haciz kararı verilmemişse de müvekkil şirketin haksız ve mesnetsiz şekilde haciz tehdidi
altında olduğu, Davalı tarafça ... ATM ... D.İş sayılı dosyasına henüz
teminat yatırılmamış olup söz konusu teminatın yatırılması halinde davalıya iade
edilmesine muvafakat edilmediği,TTK.792 m. Gereğince çeki kötü niyetli elde bulunduranın çek, geri vermekle
yükümlü olduğu, arz ve izah edilen nedenlerle; müvekkilin çalıntı çeke dayalı
yetkisiz hamil tarafından haksız yere başlatılan İcra takibi nedeniyle zarara
uğramasını önlemek amacıyla ... 1. ATM ... E Sayılı dosyasına teminat yatırılmış
olunması sebebiyle ... İcra Md ... E Sayılı dosyasından başlatılan takibin yargılama
sonuna kadar teminatsız olarak takibin tedbiren durdurulmasına, aksi kanaate
olunur ise; Uygun teminat karşılığında takibin tedbiren durdurulmasına, müvekkilin
çekten kaynaklanan alacağının tahsil imkanının tehlike altına girmesi ihtimali
kuvvetle muhtemel olması nedeniyle durdurma kararının ... Kon Ltd Şti hariç tüm
borçlular adına verilmesini, TTK.792 gereğince müvekkilin yetkili hamil olduğu çekin iadesine, yargılama
giderleri, vekalet ücretinin davalıya yüklenmesine, davalı aleyhine %20 tazminata
hükmedilmesine karar verilmesi talep ve dava etmiştir.
- source_sentence: answers-forums
sentences:
- '1017'
- main-forums
- '1.80'
- Pek çok çocuk, ödülle motive olmak yerine, kontrol altında olmaktan motive olur.
- Bir olasılık, ev işleri için ödül (ler) i belirleme amacını taşıyan bir aile toplantısı
yapmaktır.
- '2015'
model-index:
- name: SentenceTransformer based on Supabase/gte-small
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts dev
type: sts-dev
metrics:
- type: pearson_cosine
value: 0.42703730702392106
name: Pearson Cosine
- type: spearman_cosine
value: 0.434696021205193
name: Spearman Cosine
---
# SentenceTransformer based on Supabase/gte-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Supabase/gte-small](https://huggingface.co/Supabase/gte-small) on the [all-nli-pair-class](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1), [stsb](https://huggingface.co/datasets/emrecan/all-nli-tr) and [x1saint](https://huggingface.co/datasets/x1saint/sts) datasets. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Supabase/gte-small](https://huggingface.co/Supabase/gte-small) <!-- at revision 93b36ff09519291b77d6000d2e86bd8565378086 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- [all-nli-pair-class](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1)
- [stsb](https://huggingface.co/datasets/emrecan/all-nli-tr)
- [x1saint](https://huggingface.co/datasets/x1saint/sts)
- **Language:** tr
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("x1saint/gte-small-tr")
# Run inference
sentences = [
    # Example strings taken from the auto-generated widget; replace them with
    # real Turkish sentences for a meaningful similarity comparison.
    'answers-forums',
    '2015',
    '1017',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
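
The same embeddings can also back a small semantic-search setup via `sentence_transformers.util.semantic_search`. This is a minimal sketch: the corpus and query below are illustrative sentences, not taken from the training data, and the model is reloaded only so the snippet is self-contained.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("x1saint/gte-small-tr")

# Illustrative corpus; replace with your own documents.
corpus = [
    "Davacı vekili itirazın iptalini talep etmiştir.",
    "Sözleşme 2019 yılında imzalanmıştır.",
    "Araç kasko poliçesi ile sigortalıdır.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query = "İtirazın iptali davası açılmıştır."
query_embedding = model.encode(query, convert_to_tensor=True)

# Returns the top-k most similar corpus entries for the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
print(hits[0])  # list of {'corpus_id': ..., 'score': ...}
```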
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.427 |
| **spearman_cosine** | **0.4347** |
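
The evaluation above can be reproduced with the evaluator directly. The sketch below uses placeholder sentence pairs and gold scores; for a faithful comparison, the actual `sts-dev` split should be loaded instead.

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("x1saint/gte-small-tr")

# Placeholder pairs and gold similarity scores; load the real sts-dev split
# to reproduce the reported numbers.
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["Bir adam gitar çalıyor.", "Kadın kitap okuyor."],
    sentences2=["Bir kişi gitar çalmaktadır.", "Bir adam futbol oynuyor."],
    scores=[0.9, 0.1],
    name="sts-dev",
)
results = evaluator(model)
print(results)  # recent sentence-transformers versions return a dict of metrics
```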
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### all-nli-pair-class
* Dataset: [all-nli-pair-class](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1) at [67baa14](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1/tree/67baa141cf4f6634c983d77eea193c5535611e5a)
* Size: 474,283 training samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 19 tokens</li><li>mean: 419.29 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 40 tokens</li><li>mean: 401.34 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>0: ~40.80%</li><li>1: ~42.60%</li><li>2: ~16.60%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Davacı tarafından davalı aleyhine açılan İtirazın İptali davasının mahkememizde yapılan açık yargılaması sonunda dosya incelendi. AÇILAN DAVA VE İDDİA :Davacı vekilinin dava dilekçesinde özetle; Müvekkilinin EPDK'dan (Enerji Piyasası DenetlemeKurumu) aldığı onay ile Eylül 2012 den bu yana tüm Türkiye'de elektrik enerjisi tedariki ve toptan satış hizmeti sunduğunu, davalıdan da davacı şirket ile akdettiği sözleşmeye binaen müvekkili şirkketten satın aldığı elektrik ödemelerini aksattıığı düzenlenen faturaları ödemedğinden temerrüde düştüğünü, davacı tarafından defalarca uyarılmasına rağmen de borcunu ödemedeğini bunün üzerine müvekkili İstanbul ... İcra müdürlüğünün ... Esas sayılı dosyasıda ilamsız icra takibi başlattığını davalının borca kötü niyetli olarak itiraz ettiğini ve takibin durduğunu itirazın iptali ile takibin devamına davalı hakkında haksız ve kötü niyetli irizları nedeniyle %20 den aşağı olmamak üzere icra inkar tazminatına hükmedilmesine ve yargılama gideri ile vekale...</code> | <code>Davacı vekili dava dilekçesinde özetle;Müvekkili ...'a karşı halihazırda 17/07/2018'de açılmış .... İcra Dairesi'nde ... Esas Sayılı dosya ile devam eden bir icra dosyası bulunduğunu, bu icra dosyası kapsamında 12/11/2018'den beri müvekkilinin maaşına haciz uygulandığını, dosya ödeme emrinde dosyanın dayanağı, "(Kredi kartı borcu) .... İcra-... Esas dosyalarından kaynaklanan alacağın takipte ve tahsilde tekerrür olmamak üzere tahsili talebidir." şeklinde yazıldığını, müvekkili ...'ın, 2003 yılında kimliğinin çalınarak bazı bankacılık ve telefon işlemlerinde kullanıldığını, adına kredi çekildiğini, kredi kartı çıkarıldığını, telefon hattı açıldığını ve o dönemde bu konuda şikayette bulunduğunu, ... Cumhuriyet Başsavcılığı'nca 28/01/2004 suç tarihli ... soruşturma numaralı dosyasına ulaşıldığını, bu dosyada, müvekkilinin şüpheli olarak görünmekte iken şikayetçi ...A.Ş.' olduğunu, yapılan soruşturma sonucunda gerçek şüpheli şahısların ortaya çıkarılamadığı, fakat müvekkilinin suçlu olmad...</code> | <code>0</code> |
| <code>Davacı vekili dava dilekçesinde özetle; müvekkili şirket tarafından,----işbu sözleşmeye istinaden düzenlenen ---- ait alüminyum levha emtiasının, davalı taşıyıcı şirket tarafından, ---- tarihinde, dava dışı sigortalı firmanın ------ fabrikasından yüklenildiğini, davalı taşıyıcı firmanın sorumluluğunda, --- nakli gerçekleşen toplam ---; net ağırlığı --- uygun ambalajlar ile nakledilen emtiaların, gümrük işlemleri sonrası--- alıcı şirket tarafından --- tarihinde teslim alındığı ancak teslim esnasında ------paket no’lu levhaların ıslanması sebebi ile emtianın hasara uğramış olduğu tespit edilerek taşıma senedine ihtirazi kayıt düşüldüğü ve bu levhaların hurda edilmek üzere ayrıldığını, davalı taşıyıcı şirketin sorumluluk sahasında gerçekleşen işbu hasar sonrası, bağımsız ve uzman eksper tarafından yapılan incelemelere istinaden tanzim edilmiş olan ekspertiz raporunda; hasar nedeninin, emtianın taşıyıcının sorumluluğunda bulunduğu esnada ıslanarak hasara uğramış olmasından, ıslanan paketi...</code> | <code>Davacı vekili dava dilekçesinde özetle; Müvekkili------- ------------- tarihinde davalının------ aracın çarpması nedeniyle hasara uğradığını, meydana gelen kazada davalının %100 kusurlu olduğunu, müvekkili şirket tarafından zarar gören araç için ------ hasar tazminatı ödendiğini, yapılan incelemeler neticesinde davalının sigortacısı olduğu aracın kusurlu olduğunun tespit edildiğini, kaza neticesinde ------ aracın ---- geldiğini, buna göre aracın piyasa değerinin tespit edildiğini ve tespit edilen değerin ------------ tarafından, kalan ------ ise -----tarafından ödendiğini, ayrıca, -----aracın hasarı sırasında ------ kırılması,---- durdurulamaması nedeniyle ------- hasarın tespitinin de ayrıca gerekli hale geldiğini, bu nedenle müvekkili --------- hasarının tespiti için---------------nedeniyle-------- daha ödendiğini, davalının, kusurlu --------------- nedeniyle davalı tarafa başvurulduğunu, davalı tarafın --------- hiçbir gerekçesi olmaksızın ödemediğini, müvekkili şirket tarafından 1....</code> | <code>1</code> |
| <code>Davacı vekili dava dilekçesinde özetle, müvekkili şirketin keşidecisi olduğu ----------------- Taşdelen Şubesine ait, ---- seri numaralı, 17.02.2019 vade tarihli, 50.000,00-TL bedelli çeki lehtara vermek üzere hazırlandığını ancak müvekkili şirket yetkilisinin cüzdanını kaybetmesi suretiyle çeklerin zayi olduğunu, söz konusu çeklerin kötü niyetli üçüncü kişilerin eline geçmesi halinde müvekkilinin mağdur olacağını, bu nedenle ödemeden men talimatı verilmesini ve zayi edilen çekin iptaline dair karar verilmesini talep ve dava etmiştir.</code> | <code>Davacı vekili dava dilekçesinde özetle; ... plakalı araç ... sayılı Genişletilmiş Kasko Sigortası Poliçesi ile müvekkili şirkete, sigortalı olduğunu, hadisenin, 14/06/2017 tarihinde ... plakalı aracın ... ... ... yolu üzerinde seyir halinde iken önünde seyir halinde bulunan sigortalı ... plakalı aracın trafik nedeniyle duraksaması nedeniyle duramayarak çarpması akabinde sigortalı ... plakalı aracın önünde seyir halinde bulunan ... plakalı araca, onun da önünde seyir halinde bulunan ... plakalı araca arkadan çarpması ve bu araçların sırasıyla ... aracın arkaya ... plakalı araca onun da duramayarak ... plakalı araca arkadan çarpması neticesinde çoklu maddi hasarlı trafik kazası meydana gelmiştir, Davalı/Borçlu ... sigortalısı olan ... plakalı aracın, müvekkil şirket sigortalısı olan ... Plakalı araca çarpması neticesinde maddi hasar aldığını, sigortalının, yapmış olduğu başvuru neticesinde Hasar gören sigortalı araca yaptırılan ekspertiz incelemesi sonucunda aracın hasarlı olduğunun tesp...</code> | <code>0</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
#### stsb
* Dataset: [stsb](https://huggingface.co/datasets/emrecan/all-nli-tr) at [daeabfb](https://huggingface.co/datasets/emrecan/all-nli-tr/tree/daeabfbc01f82757ab998bd23ce0ddfceaa5e24d)
* Size: 941,086 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 47.0 tokens</li><li>max: 301 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 25.29 tokens</li><li>max: 103 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.48</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------|:-----------------|
| <code>Kavramsal olarak krem kaymağının iki temel boyutu vardır - ürün ve coğrafya.</code> | <code>Ürün ve coğrafya krem kaymağını işe yarıyor.</code> | <code>0.5</code> |
| <code>Mevsim boyunca ve sanırım senin seviyendeyken onları bir sonraki seviyeye düşürürsün. Eğer ebeveyn takımını çağırmaya karar verirlerse Braves üçlü A'dan birini çağırmaya karar verirlerse çifte bir adam onun yerine geçmeye gider ve bekar bir adam gelir.</code> | <code>Eğer insanlar hatırlarsa, bir sonraki seviyeye düşersin.</code> | <code>1.0</code> |
| <code>Numaramızdan biri talimatlarınızı birazdan yerine getirecektir.</code> | <code>Ekibimin bir üyesi emirlerinizi büyük bir hassasiyetle yerine getirecektir.</code> | <code>1.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
#### x1saint
* Dataset: [x1saint](https://huggingface.co/datasets/x1saint/sts) at [85ac563](https://huggingface.co/datasets/x1saint/sts/tree/85ac563a90a8b801479ac1bc689b743574bb0e90)
* Size: 1,523 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 42.14 tokens</li><li>max: 353 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 40.23 tokens</li><li>max: 172 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.69</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-----------------|
| <code>George Orwell, 1903 yılında Hindistan'ın Bengal bölgesinde doğdu.</code> | <code>George Orwell, Montihari şehrinde doğmuştur.</code> | <code>0.8</code> |
| <code>Orwell, Eton College'de eğitimini tamamladı.</code> | <code>Orwell öğrenimini Eton College'de bitirdi.</code> | <code>1.0</code> |
| <code>George Orwell, İngiltere yönetimine karşı çıkarak Hindistan Polisi görevinden istifa etti.</code> | <code>Orwell, İmparatorluk yönetiminin iç yüzünü görünce istifayı tercih etti.</code> | <code>0.8</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Evaluation Datasets
#### all-nli-pair-class
* Dataset: [all-nli-pair-class](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1) at [67baa14](https://huggingface.co/datasets/Turkish-NLI/legal_nli_TR_V1/tree/67baa141cf4f6634c983d77eea193c5535611e5a)
* Size: 5,000 evaluation samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 74 tokens</li><li>mean: 420.94 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 406.85 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>0: ~44.30%</li><li>1: ~39.00%</li><li>2: ~16.70%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Davacı vekili dava dilekçesinde özetle; Davacı şirketin taşıyan sıfatıyla davalı şirkete ait yükü kendisi ile yapılan taşıma sözleşmesi uyarınca ... Limanından ... tarihinde yükleyerek .../ ... Limanı’na taşıdığını ve yükü ihtiva eden 3 adet konteyneri liman sahasına kapalı ve mühürlü olarak ... tarihinde gemiden tahliye ettiğini, ... numaralı konişmentoda belirtildiği üzere, söz konusu deniz taşıma işinde davacı şirkete ait ‘...’ numaralı 3 adet konteynerin kullanıldığını, taşıma konusu yüklere ilişkin varış ihbarlarının düzenlendiğini ve yüklerin tahliye edildiğini, bugüne dek söz konusu yüklerin teslim alınmadığını, yüklerin konişmentolarda öngörülen süre içerisinde gönderilen tarafından teslim alınmaması nedeniyle, davacı şirket tarafından yapılan bütün iyiniyetli girişimlerin sonuçsuz kaldığını, aradan geçen yaklaşık 11 aylık süre zarfında yükün teslim alınmadığını, konteynerlerin tahliye edilmediğini, konteynerlerin tahliye edilmemesi üzerine davacı taşıyan şirket çalışanı tarafı...</code> | <code>Davacı vekili dava dilekçesinde özetle; Davalı tarafın taşıyan müvekkili ... A/Ş vasıtası ile ... numaralı konişmento tahtında ... numaralı 1 adet 40'lık REEFER tip konteyner muhteviyatı yükünü Hindistan'ın Cochin Limanından Gemlik Limanı' na denizyolu ile taşıttığını, bu taşımalarda davalı yanın ithalatçı ve taşımaya ilişkin konişmentoya göre yük alıcısı konumunda olduğunu, davalının ithalatçısı ve yük alıcısı olduğu ... numaralı konişmento tahtında taşınan 1 adet 40 'lık reefer konteynerin yükleme limanı olan Hindistan' in Cochin Limanı' nda 11.07.2017 tarihinde gemiye yüklendiğini ve 28.08.2017 tarihinde Gemlik ... Limanı' nda gemiden tahliye edildiğini, davalının ... numaralı konişmento tahtında taşman emtiaları tahliye limanı olan Gemlik Limanı' na ulaşmadan önce davalıya bir örneği delil listelerinde sunulan "..." yani "Varış İhbarnamesi" gönderildiği ve davalının yükünün 28.08.2017 tarihinde Gemlik Limanı' na ulaşacağının ihbar edildiğini, tahliye limanındaki konteyner muhtevi...</code> | <code>1</code> |
| <code> Davacı vekili dava dilekçesinde özetle; Davacı ... A.Ş.'nin 1986 yılından beri Irak piyasasında iş yapan ve gerek iş ahlakı ve gerekse dürüstlüğüyle tanınan ve dolayısıyla Irak'ta yapılacak yeni bir iş olduğunda, ilk haberdar edilen bir firma olduğunu, 1989 yılında da İrak'a daimi ofisini açtığını, 2001 yılında ilgili bakanlığın davacı şirketten Saf Bakır Şerit talebinde bulunduğunu, davacının da bunu temin etmek için davalı şirketle ilişki kurduğunu, davalı şirketin Irak'ın talep ettiği spesifikasyonda mal üretecek araca sahip bulunmadığını beyan etmesi üzerine, davacı şirketin bu konuda da yardımcı olduğunu ve üretimi gerçekleştirecek makinelerin davalı tarafından teminine hem teknolojik bilgi ve hem de maddi katkıda bulunduğunu, böylelikle ilk olarak 2002 yılında, davalının ürettiği malların davacı şirket tarafından Irak'a pazarlandığını, bu arada Amerika Irak'ı istila edince, ilişkilerin bir süre askıda kaldığını ve nihayet 2006 yılında Irak Sanayi Bakanlığı'nın davacı şirketi yen...</code> | <code>Haksız rekabete ilişkin<br>bu Kısım hükümlerinin amacı, bütün katılanların menfaatine, dürüst ve bozulmamış<br>rekabetin sağlanmasıdır.Rakipler arasında veya tedarik edenlerle müşteriler<br>arasındaki ilişkileri etkileyen aldatıcı veya dürüstlük kuralına diğer şekillerdeki<br>aykırı davranışlar ile ticari uygulamalar haksız ve hukuka aykırıdır.</code> | <code>2</code> |
| <code> Davacı vekili dava dilekçesinde özetle; Müvekkili şirketin perakende sektöründe ağırlıklı olarak elektronik cihazların satışı işiyle iştigal ettiğini ve tüketiciler tarafından çeşitli şikayetlerle kendisine teslim edilen ürünleri, teknik servis olarak faaliyet gösteren belirli şirketlere onarım için yönlendirdiğini, bu lojistik faaliyetlerin zaman zaman, kargo şirketi olarak faaliyet gösteren davalı taraf ile gerçekleştirildiğini, ... A.Ş.'nin, müvekkili şirketin ticari ilişkileri kapsamında belirli ürünlerini teslim ettiği bir yetkili teknik servis olarak faaliyet gösterdiğini ve belirli cihazları onarım için teslim aldıktan sonra yine müvekkili şirkete teslim ettiğini, bu operasyonların dış lojistik tarafının da ...'nin anlaşmalı olduğu kargo şirketi olan davalı taraf ile gerçekleştirildiğini, bu ticari ilişki sebebi ile yedi adet cep telefonun da onarım için ...’ne gönderildiğini ve ...’nde işleme tabi tutulan 7 adet telefonların gönderici sıfatı ile ... tarafından müvekkili şirket...</code> | <code>Zarara, kasten veya<br>pervasızca bir davranışla ve böyle bir zararın meydana gelmesi ihtimalinin bilinciyle<br>işlenmiş bir fiilinin veya ihmalinin sebebiyet verdiği ispat edilen taşıyıcı veya<br>879 uncu maddede belirtilen kişiler, bu Kısımda öngörülen sorumluluktan kurtulma<br>hâllerinden ve sorumluluk sınırlamalarından yararlanamaz.</code> | <code>2</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#softmaxloss)
#### stsb
* Dataset: [stsb](https://huggingface.co/datasets/figenfikri/stsb_tr) at [bb7685b](https://huggingface.co/datasets/figenfikri/stsb_tr/tree/bb7685bff798ac1ed07d8cd08e5df43eaaeba2ee)
* Size: 1,500 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 45.29 tokens</li><li>max: 304 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 24.86 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------|:-----------------|
| <code>Yeni haklar yeterince güzel.</code> | <code>Herkes gerçekten en yeni faydaları seviyor</code> | <code>0.5</code> |
| <code>Bu site, tüm ödül kazananların bir listesini ve Hükümet Yönetici makalelerinin aranabilir bir veritabanını içerir.</code> | <code>Web sitesinde yer alan Hükümet Yürütme makaleleri aranamaz.</code> | <code>0.0</code> |
| <code>Bilemiyorum. Onunla ilgili karışık duygularım var. Bazen ondan hoşlanıyorum ama aynı zamanda birisinin onu dövmesini görmeyi seviyorum.</code> | <code>Çoğunlukla ondan hoşlanıyorum, ama yine de birinin onu dövdüğünü görmekten zevk alıyorum.</code> | <code>1.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
#### x1saint
* Dataset: [x1saint](https://huggingface.co/datasets/figenfikri/stsb_tr) at [bb7685b](https://huggingface.co/datasets/figenfikri/stsb_tr/tree/bb7685bff798ac1ed07d8cd08e5df43eaaeba2ee)
* Size: 1,500 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 45.29 tokens</li><li>max: 304 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 24.86 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------|:-----------------|
| <code>Yeni haklar yeterince güzel.</code> | <code>Herkes gerçekten en yeni faydaları seviyor</code> | <code>0.5</code> |
| <code>Bu site, tüm ödül kazananların bir listesini ve Hükümet Yönetici makalelerinin aranabilir bir veritabanını içerir.</code> | <code>Web sitesinde yer alan Hükümet Yürütme makaleleri aranamaz.</code> | <code>0.0</code> |
| <code>Bilemiyorum. Onunla ilgili karışık duygularım var. Bazen ondan hoşlanıyorum ama aynı zamanda birisinin onu dövmesini görmeyi seviyorum.</code> | <code>Çoğunlukla ondan hoşlanıyorum, ama yine de birinin onu dövdüğünü görmekten zevk alıyorum.</code> | <code>1.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 1e-06
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
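
Putting the datasets, losses, and hyperparameters together, the run can be sketched roughly as below with the `SentenceTransformerTrainer` API. Dataset loading, column handling, and the output directory are simplified assumptions; the actual run mixed all of the training sets listed above and evaluated on the held-out splits.

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CoSENTLoss, SoftmaxLoss

model = SentenceTransformer("Supabase/gte-small")

# Simplified: one NLI-style set (premise/hypothesis/label) and one STS-style set
# (sentence1/sentence2/score); the real run combined several of each.
nli = load_dataset("Turkish-NLI/legal_nli_TR_V1", split="train")
sts = load_dataset("x1saint/sts", split="train")

losses = {
    "nli": SoftmaxLoss(
        model,
        sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
        num_labels=3,
    ),
    "sts": CoSENTLoss(model),
}

args = SentenceTransformerTrainingArguments(
    output_dir="gte-small-tr",  # assumption
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=1e-6,
    warmup_ratio=0.1,
    fp16=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset={"nli": nli, "sts": sts},  # evaluation datasets omitted for brevity
    loss=losses,
)
trainer.train()
```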
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | all-nli-pair-class loss | stsb loss | x1saint loss | sts-dev_spearman_cosine |
|:------:|:-----:|:-------------:|:-----------------------:|:---------:|:------------:|:-----------------------:|
| 0.0011 | 100 | 3.5189 | - | - | - | - |
| 0.0023 | 200 | 3.0711 | - | - | - | - |
| 0.0011 | 100 | 3.5187 | - | - | - | - |
| 0.0023 | 200 | 3.0709 | - | - | - | - |
| 0.0034 | 300 | 3.2458 | - | - | - | - |
| 0.0045 | 400 | 3.1891 | - | - | - | - |
| 0.0056 | 500 | 3.3556 | - | - | - | - |
| 0.0068 | 600 | 3.4514 | - | - | - | - |
| 0.0079 | 700 | 3.2443 | - | - | - | - |
| 0.0090 | 800 | 3.2109 | - | - | - | - |
| 0.0102 | 900 | 3.4956 | - | - | - | - |
| 0.0113 | 1000 | 3.4255 | 1.0730 | 4.5456 | 4.5456 | 0.2466 |
| 0.0124 | 1100 | 3.1637 | - | - | - | - |
| 0.0136 | 1200 | 3.2261 | - | - | - | - |
| 0.0147 | 1300 | 3.3524 | - | - | - | - |
| 0.0158 | 1400 | 3.4991 | - | - | - | - |
| 0.0169 | 1500 | 3.5157 | - | - | - | - |
| 0.0181 | 1600 | 3.5079 | - | - | - | - |
| 0.0192 | 1700 | 3.2644 | - | - | - | - |
| 0.0203 | 1800 | 3.2737 | - | - | - | - |
| 0.0215 | 1900 | 3.5461 | - | - | - | - |
| 0.0226 | 2000 | 3.6754 | 1.0257 | 4.5012 | 4.5012 | 0.2563 |
| 0.0237 | 2100 | 3.414 | - | - | - | - |
| 0.0248 | 2200 | 3.0237 | - | - | - | - |
| 0.0260 | 2300 | 3.383 | - | - | - | - |
| 0.0271 | 2400 | 3.2955 | - | - | - | - |
| 0.0282 | 2500 | 3.0388 | - | - | - | - |
| 0.0294 | 2600 | 3.2 | - | - | - | - |
| 0.0305 | 2700 | 3.3309 | - | - | - | - |
| 0.0316 | 2800 | 3.0292 | - | - | - | - |
| 0.0327 | 2900 | 2.9697 | - | - | - | - |
| 0.0339 | 3000 | 2.8957 | 0.9897 | 4.4610 | 4.4610 | 0.2651 |
| 0.0350 | 3100 | 3.3987 | - | - | - | - |
| 0.0361 | 3200 | 3.0995 | - | - | - | - |
| 0.0373 | 3300 | 3.1995 | - | - | - | - |
| 0.0384 | 3400 | 3.4175 | - | - | - | - |
| 0.0395 | 3500 | 3.1195 | - | - | - | - |
| 0.0407 | 3600 | 3.1149 | - | - | - | - |
| 0.0418 | 3700 | 3.2614 | - | - | - | - |
| 0.0429 | 3800 | 3.3849 | - | - | - | - |
| 0.0440 | 3900 | 3.3391 | - | - | - | - |
| 0.0452 | 4000 | 3.1803 | 0.9553 | 4.4195 | 4.4195 | 0.2719 |
| 0.0463 | 4100 | 3.0133 | - | - | - | - |
| 0.0474 | 4200 | 3.3885 | - | - | - | - |
| 0.0486 | 4300 | 3.132 | - | - | - | - |
| 0.0497 | 4400 | 3.2 | - | - | - | - |
| 0.0508 | 4500 | 3.3284 | - | - | - | - |
| 0.0519 | 4600 | 3.1747 | - | - | - | - |
| 0.0531 | 4700 | 3.1531 | - | - | - | - |
| 0.0542 | 4800 | 3.3195 | - | - | - | - |
| 0.0553 | 4900 | 3.0077 | - | - | - | - |
| 0.0565 | 5000 | 2.7127 | 0.8501 | 4.3839 | 4.3839 | 0.2808 |
| 0.0576 | 5100 | 3.2574 | - | - | - | - |
| 0.0587 | 5200 | 3.3916 | - | - | - | - |
| 0.0598 | 5300 | 3.0803 | - | - | - | - |
| 0.0610 | 5400 | 3.3637 | - | - | - | - |
| 0.0621 | 5500 | 3.4361 | - | - | - | - |
| 0.0632 | 5600 | 3.4658 | - | - | - | - |
| 0.0644 | 5700 | 3.1167 | - | - | - | - |
| 0.0655 | 5800 | 3.3059 | - | - | - | - |
| 0.0666 | 5900 | 3.1765 | - | - | - | - |
| 0.0678 | 6000 | 3.2381 | 0.7268 | 4.3579 | 4.3579 | 0.2943 |
| 0.0689 | 6100 | 3.0319 | - | - | - | - |
| 0.0700 | 6200 | 3.2476 | - | - | - | - |
| 0.0711 | 6300 | 2.9789 | - | - | - | - |
| 0.0723 | 6400 | 3.1056 | - | - | - | - |
| 0.0734 | 6500 | 3.2808 | - | - | - | - |
| 0.0745 | 6600 | 2.9506 | - | - | - | - |
| 0.0757 | 6700 | 2.8923 | - | - | - | - |
| 0.0768 | 6800 | 3.0534 | - | - | - | - |
| 0.0779 | 6900 | 3.0781 | - | - | - | - |
| 0.0790 | 7000 | 3.3438 | 0.6398 | 4.3437 | 4.3437 | 0.3081 |
| 0.0802 | 7100 | 3.2635 | - | - | - | - |
| 0.0813 | 7200 | 3.2018 | - | - | - | - |
| 0.0824 | 7300 | 2.8889 | - | - | - | - |
| 0.0836 | 7400 | 3.4046 | - | - | - | - |
| 0.0847 | 7500 | 3.4731 | - | - | - | - |
| 0.0858 | 7600 | 3.1368 | - | - | - | - |
| 0.0869 | 7700 | 2.9244 | - | - | - | - |
| 0.0881 | 7800 | 3.1948 | - | - | - | - |
| 0.0892 | 7900 | 3.2156 | - | - | - | - |
| 0.0903 | 8000 | 2.9844 | 0.5916 | 4.3358 | 4.3358 | 0.3234 |
| 0.0915 | 8100 | 2.8774 | - | - | - | - |
| 0.0926 | 8200 | 2.5593 | - | - | - | - |
| 0.0937 | 8300 | 2.8402 | - | - | - | - |
| 0.0949 | 8400 | 3.0853 | - | - | - | - |
| 0.0960 | 8500 | 3.2655 | - | - | - | - |
| 0.0971 | 8600 | 3.1169 | - | - | - | - |
| 0.0982 | 8700 | 3.2144 | - | - | - | - |
| 0.0994 | 8800 | 2.8349 | - | - | - | - |
| 0.1005 | 8900 | 2.9291 | - | - | - | - |
| 0.1016 | 9000 | 2.7601 | 0.5400 | 4.3210 | 4.3210 | 0.3397 |
| 0.1028 | 9100 | 2.8425 | - | - | - | - |
| 0.1039 | 9200 | 3.0608 | - | - | - | - |
| 0.1050 | 9300 | 3.1085 | - | - | - | - |
| 0.1061 | 9400 | 2.9238 | - | - | - | - |
| 0.1073 | 9500 | 2.9525 | - | - | - | - |
| 0.1084 | 9600 | 3.3401 | - | - | - | - |
| 0.1095 | 9700 | 2.9262 | - | - | - | - |
| 0.1107 | 9800 | 3.1004 | - | - | - | - |
| 0.1118 | 9900 | 2.5464 | - | - | - | - |
| 0.1129 | 10000 | 3.1688 | 0.4847 | 4.3110 | 4.3110 | 0.3512 |
| 0.1141 | 10100 | 3.1941 | - | - | - | - |
| 0.1152 | 10200 | 3.0643 | - | - | - | - |
| 0.1163 | 10300 | 2.8023 | - | - | - | - |
| 0.1174 | 10400 | 3.3176 | - | - | - | - |
| 0.1186 | 10500 | 3.162 | - | - | - | - |
| 0.1197 | 10600 | 3.0185 | - | - | - | - |
| 0.1208 | 10700 | 3.0583 | - | - | - | - |
| 0.1220 | 10800 | 3.2895 | - | - | - | - |
| 0.1231 | 10900 | 2.8879 | - | - | - | - |
| 0.1242 | 11000 | 3.135 | 0.4262 | 4.3080 | 4.3080 | 0.3620 |
| 0.1253 | 11100 | 3.1176 | - | - | - | - |
| 0.1265 | 11200 | 3.0155 | - | - | - | - |
| 0.1276 | 11300 | 3.0035 | - | - | - | - |
| 0.1287 | 11400 | 3.0159 | - | - | - | - |
| 0.1299 | 11500 | 2.8225 | - | - | - | - |
| 0.1310 | 11600 | 2.9968 | - | - | - | - |
| 0.1321 | 11700 | 2.9152 | - | - | - | - |
| 0.1332 | 11800 | 3.0774 | - | - | - | - |
| 0.1344 | 11900 | 3.2168 | - | - | - | - |
| 0.1355 | 12000 | 2.7994 | 0.3985 | 4.2907 | 4.2907 | 0.3715 |
| 0.1366 | 12100 | 3.1756 | - | - | - | - |
| 0.1378 | 12200 | 3.3252 | - | - | - | - |
| 0.1389 | 12300 | 3.0435 | - | - | - | - |
| 0.1400 | 12400 | 3.0718 | - | - | - | - |
| 0.1412 | 12500 | 3.121 | - | - | - | - |
| 0.1423 | 12600 | 3.2819 | - | - | - | - |
| 0.1434 | 12700 | 3.0131 | - | - | - | - |
| 0.1445 | 12800 | 3.3347 | - | - | - | - |
| 0.1457 | 12900 | 3.228 | - | - | - | - |
| 0.1468 | 13000 | 2.9512 | 0.3903 | 4.2888 | 4.2888 | 0.3793 |
| 0.1479 | 13100 | 3.0776 | - | - | - | - |
| 0.1491 | 13200 | 2.9721 | - | - | - | - |
| 0.1502 | 13300 | 2.8265 | - | - | - | - |
| 0.1513 | 13400 | 2.9286 | - | - | - | - |
| 0.1524 | 13500 | 2.7661 | - | - | - | - |
| 0.1536 | 13600 | 2.8168 | - | - | - | - |
| 0.1547 | 13700 | 3.1262 | - | - | - | - |
| 0.1558 | 13800 | 3.1392 | - | - | - | - |
| 0.1570 | 13900 | 3.1336 | - | - | - | - |
| 0.1581 | 14000 | 3.1258 | 0.3315 | 4.2807 | 4.2807 | 0.3860 |
| 0.1592 | 14100 | 3.0987 | - | - | - | - |
| 0.1603 | 14200 | 2.7666 | - | - | - | - |
| 0.1615 | 14300 | 3.0599 | - | - | - | - |
| 0.1626 | 14400 | 3.1154 | - | - | - | - |
| 0.1637 | 14500 | 3.1234 | - | - | - | - |
| 0.1649 | 14600 | 3.025 | - | - | - | - |
| 0.1660 | 14700 | 3.0224 | - | - | - | - |
| 0.1671 | 14800 | 2.922 | - | - | - | - |
| 0.1683 | 14900 | 2.7217 | - | - | - | - |
| 0.1694 | 15000 | 2.7902 | 0.3253 | 4.2890 | 4.2890 | 0.3908 |
| 0.1705 | 15100 | 3.2199 | - | - | - | - |
| 0.1716 | 15200 | 3.1018 | - | - | - | - |
| 0.1728 | 15300 | 2.6536 | - | - | - | - |
| 0.1739 | 15400 | 3.0888 | - | - | - | - |
| 0.1750 | 15500 | 2.728 | - | - | - | - |
| 0.1762 | 15600 | 3.0917 | - | - | - | - |
| 0.1773 | 15700 | 2.9809 | - | - | - | - |
| 0.1784 | 15800 | 2.9921 | - | - | - | - |
| 0.1795 | 15900 | 3.1358 | - | - | - | - |
| 0.1807 | 16000 | 3.1537 | 0.3201 | 4.2816 | 4.2816 | 0.3950 |
| 0.1818 | 16100 | 3.0497 | - | - | - | - |
| 0.1829 | 16200 | 3.014 | - | - | - | - |
| 0.1841 | 16300 | 2.7652 | - | - | - | - |
| 0.1852 | 16400 | 2.809 | - | - | - | - |
| 0.1863 | 16500 | 3.138 | - | - | - | - |
| 0.1874 | 16600 | 2.7983 | - | - | - | - |
| 0.1886 | 16700 | 2.9568 | - | - | - | - |
| 0.1897 | 16800 | 2.9604 | - | - | - | - |
| 0.1908 | 16900 | 3.1076 | - | - | - | - |
| 0.1920 | 17000 | 3.0263 | 0.2751 | 4.2702 | 4.2702 | 0.4003 |
| 0.1931 | 17100 | 3.0295 | - | - | - | - |
| 0.1942 | 17200 | 3.1564 | - | - | - | - |
| 0.1954 | 17300 | 2.8307 | - | - | - | - |
| 0.1965 | 17400 | 3.1378 | - | - | - | - |
| 0.1976 | 17500 | 3.0607 | - | - | - | - |
| 0.1987 | 17600 | 2.8302 | - | - | - | - |
| 0.1999 | 17700 | 2.8098 | - | - | - | - |
| 0.2010 | 17800 | 3.4055 | - | - | - | - |
| 0.2021 | 17900 | 2.7756 | - | - | - | - |
| 0.2033 | 18000 | 3.0922 | 0.2955 | 4.2613 | 4.2613 | 0.4060 |
| 0.2044 | 18100 | 3.161 | - | - | - | - |
| 0.2055 | 18200 | 3.3236 | - | - | - | - |
| 0.2066 | 18300 | 2.6951 | - | - | - | - |
| 0.2078 | 18400 | 2.9456 | - | - | - | - |
| 0.2089 | 18500 | 2.7356 | - | - | - | - |
| 0.2100 | 18600 | 3.0398 | - | - | - | - |
| 0.2112 | 18700 | 2.9493 | - | - | - | - |
| 0.2123 | 18800 | 2.9966 | - | - | - | - |
| 0.2134 | 18900 | 3.3613 | - | - | - | - |
| 0.2146 | 19000 | 2.9626 | 0.2534 | 4.2668 | 4.2668 | 0.4097 |
| 0.2157 | 19100 | 3.0809 | - | - | - | - |
| 0.2168 | 19200 | 2.9583 | - | - | - | - |
| 0.2179 | 19300 | 2.9046 | - | - | - | - |
| 0.2191 | 19400 | 3.4546 | - | - | - | - |
| 0.2202 | 19500 | 3.2281 | - | - | - | - |
| 0.2213 | 19600 | 2.8041 | - | - | - | - |
| 0.2225 | 19700 | 2.7885 | - | - | - | - |
| 0.2236 | 19800 | 2.9419 | - | - | - | - |
| 0.2247 | 19900 | 2.9497 | - | - | - | - |
| 0.2258 | 20000 | 2.8604 | 0.2315 | 4.2608 | 4.2608 | 0.4136 |
| 0.2270 | 20100 | 2.897 | - | - | - | - |
| 0.2281 | 20200 | 3.0587 | - | - | - | - |
| 0.2292 | 20300 | 2.9539 | - | - | - | - |
| 0.2304 | 20400 | 3.0268 | - | - | - | - |
| 0.2315 | 20500 | 2.5965 | - | - | - | - |
| 0.2326 | 20600 | 2.5413 | - | - | - | - |
| 0.2337 | 20700 | 2.975 | - | - | - | - |
| 0.2349 | 20800 | 2.8803 | - | - | - | - |
| 0.2360 | 20900 | 2.8471 | - | - | - | - |
| 0.2371 | 21000 | 2.8503 | 0.2041 | 4.2626 | 4.2626 | 0.4157 |
| 0.2383 | 21100 | 3.0019 | - | - | - | - |
| 0.2394 | 21200 | 2.8871 | - | - | - | - |
| 0.2405 | 21300 | 2.8686 | - | - | - | - |
| 0.2417 | 21400 | 3.0021 | - | - | - | - |
| 0.2428 | 21500 | 2.9747 | - | - | - | - |
| 0.2439 | 21600 | 2.8709 | - | - | - | - |
| 0.2450 | 21700 | 3.0914 | - | - | - | - |
| 0.2462 | 21800 | 3.2664 | - | - | - | - |
| 0.2473 | 21900 | 2.7196 | - | - | - | - |
| 0.2484 | 22000 | 3.1535 | 0.2467 | 4.2663 | 4.2663 | 0.4176 |
| 0.2496 | 22100 | 2.8622 | - | - | - | - |
| 0.2507 | 22200 | 2.9969 | - | - | - | - |
| 0.2518 | 22300 | 2.53 | - | - | - | - |
| 0.2529 | 22400 | 2.4632 | - | - | - | - |
| 0.2541 | 22500 | 3.1082 | - | - | - | - |
| 0.2552 | 22600 | 2.5799 | - | - | - | - |
| 0.2563 | 22700 | 2.8729 | - | - | - | - |
| 0.2575 | 22800 | 2.8414 | - | - | - | - |
| 0.2586 | 22900 | 2.8917 | - | - | - | - |
| 0.2597 | 23000 | 2.6811 | 0.2159 | 4.2583 | 4.2583 | 0.4209 |
| 0.2608 | 23100 | 3.0415 | - | - | - | - |
| 0.2620 | 23200 | 2.8393 | - | - | - | - |
| 0.2631 | 23300 | 3.2675 | - | - | - | - |
| 0.2642 | 23400 | 2.8109 | - | - | - | - |
| 0.2654 | 23500 | 3.2762 | - | - | - | - |
| 0.2665 | 23600 | 3.0291 | - | - | - | - |
| 0.2676 | 23700 | 3.0371 | - | - | - | - |
| 0.2688 | 23800 | 2.5999 | - | - | - | - |
| 0.2699 | 23900 | 3.1188 | - | - | - | - |
| 0.2710 | 24000 | 2.548 | 0.2729 | 4.2453 | 4.2453 | 0.4242 |
| 0.2721 | 24100 | 2.8282 | - | - | - | - |
| 0.2733 | 24200 | 2.872 | - | - | - | - |
| 0.2744 | 24300 | 2.6728 | - | - | - | - |
| 0.2755 | 24400 | 3.229 | - | - | - | - |
| 0.2767 | 24500 | 2.6548 | - | - | - | - |
| 0.2778 | 24600 | 2.9694 | - | - | - | - |
| 0.2789 | 24700 | 2.6256 | - | - | - | - |
| 0.2800 | 24800 | 3.0095 | - | - | - | - |
| 0.2812 | 24900 | 3.2991 | - | - | - | - |
| 0.2823 | 25000 | 2.7506 | 0.2124 | 4.2584 | 4.2584 | 0.4249 |
| 0.2834 | 25100 | 2.7212 | - | - | - | - |
| 0.2846 | 25200 | 3.1904 | - | - | - | - |
| 0.2857 | 25300 | 2.9579 | - | - | - | - |
| 0.2868 | 25400 | 3.0365 | - | - | - | - |
| 0.2880 | 25500 | 3.053 | - | - | - | - |
| 0.2891 | 25600 | 2.9033 | - | - | - | - |
| 0.2902 | 25700 | 2.6707 | - | - | - | - |
| 0.2913 | 25800 | 2.8541 | - | - | - | - |
| 0.2925 | 25900 | 3.047 | - | - | - | - |
| 0.2936 | 26000 | 2.5607 | 0.2063 | 4.2468 | 4.2468 | 0.4281 |
| 0.2947 | 26100 | 2.9208 | - | - | - | - |
| 0.2959 | 26200 | 2.8091 | - | - | - | - |
| 0.2970 | 26300 | 3.5143 | - | - | - | - |
| 0.2981 | 26400 | 2.5564 | - | - | - | - |
| 0.2992 | 26500 | 2.8665 | - | - | - | - |
| 0.3004 | 26600 | 2.5691 | - | - | - | - |
| 0.3015 | 26700 | 2.5526 | - | - | - | - |
| 0.3026 | 26800 | 2.7084 | - | - | - | - |
| 0.3038 | 26900 | 3.1267 | - | - | - | - |
| 0.3049 | 27000 | 2.4162 | 0.1569 | 4.2439 | 4.2439 | 0.4296 |
| 0.3060 | 27100 | 2.5168 | - | - | - | - |
| 0.3071 | 27200 | 3.0819 | - | - | - | - |
| 0.3083 | 27300 | 3.0642 | - | - | - | - |
| 0.3094 | 27400 | 3.2743 | - | - | - | - |
| 0.3105 | 27500 | 2.7929 | - | - | - | - |
| 0.3117 | 27600 | 2.8661 | - | - | - | - |
| 0.3128 | 27700 | 2.9403 | - | - | - | - |
| 0.3139 | 27800 | 2.8967 | - | - | - | - |
| 0.3151 | 27900 | 2.8949 | - | - | - | - |
| 0.3162 | 28000 | 2.9087 | 0.1647 | 4.2450 | 4.2450 | 0.4316 |
| 0.3173 | 28100 | 2.7417 | - | - | - | - |
| 0.3184 | 28200 | 3.0461 | - | - | - | - |
| 0.3196 | 28300 | 2.747 | - | - | - | - |
| 0.3207 | 28400 | 2.8057 | - | - | - | - |
| 0.3218 | 28500 | 3.0305 | - | - | - | - |
| 0.3230 | 28600 | 3.1517 | - | - | - | - |
| 0.3241 | 28700 | 2.9611 | - | - | - | - |
| 0.3252 | 28800 | 2.7057 | - | - | - | - |
| 0.3263 | 28900 | 2.5268 | - | - | - | - |
| 0.3275 | 29000 | 2.9869 | 0.2016 | 4.2455 | 4.2455 | 0.4334 |
| 0.3286 | 29100 | 3.2638 | - | - | - | - |
| 0.3297 | 29200 | 2.8948 | - | - | - | - |
| 0.3309 | 29300 | 3.0118 | - | - | - | - |
| 0.3320 | 29400 | 2.8534 | - | - | - | - |
| 0.3331 | 29500 | 3.1632 | - | - | - | - |
| 0.3342 | 29600 | 2.9116 | - | - | - | - |
| 0.3354 | 29700 | 2.5557 | - | - | - | - |
| 0.3365 | 29800 | 2.7745 | - | - | - | - |
| 0.3376 | 29900 | 2.5932 | - | - | - | - |
| 0.3388 | 30000 | 2.7092 | 0.1921 | 4.2458 | 4.2458 | 0.4347 |
| 0.3399 | 30100 | 3.2183 | - | - | - | - |
| 0.3410 | 30200 | 2.857 | - | - | - | - |
| 0.3422 | 30300 | 2.9008 | - | - | - | - |
| 0.3433 | 30400 | 2.8235 | - | - | - | - |
| 0.3444 | 30500 | 2.6956 | - | - | - | - |
| 0.3455 | 30600 | 2.9611 | - | - | - | - |
| 0.3467 | 30700 | 3.1242 | - | - | - | - |
| 0.3478 | 30800 | 3.1466 | - | - | - | - |
| 0.3489 | 30900 | 2.8542 | - | - | - | - |
| 0.3501 | 31000 | 2.8809 | - | - | - | - |
</details>
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.0
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers and SoftmaxLoss
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] |
Non_BioNLP
|
tablane/distilbert-base-uncased.finetuned-emotion
|
tablane
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,706,383,278,000 | 2024-01-27T19:31:11 | 3 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased.finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.924
name: Accuracy
- type: f1
value: 0.9240046085344084
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased.finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2149
- Accuracy: 0.924
- F1: 0.9240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
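The hyperparameters above map onto a standard 🤗 `Trainer` configuration. The sketch below is illustrative only — the preprocessing, label count, and metric wiring are assumptions and are not taken from the original training script:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

dataset = load_dataset("emotion")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Truncate here; dynamic padding is handled per batch by the data collator.
    return tokenizer(batch["text"], truncation=True)

encoded = dataset.map(tokenize, batched=True)

# The emotion dataset has 6 target classes.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6)

args = TrainingArguments(
    output_dir="distilbert-base-uncased.finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=2,
    seed=42,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
```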
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.814 | 1.0 | 250 | 0.3135 | 0.903 | 0.9013 |
| 0.2487 | 2.0 | 500 | 0.2149 | 0.924 | 0.9240 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
nbogdan/flant5-xl-2ex-paraphrasing-3epochs
|
nbogdan
| null |
[
"adapter-transformers",
"t5",
"adapterhub:self-explanations",
"dataset:self-explanations",
"region:us"
] | 1,693,934,005,000 | 2023-09-05T17:13:47 | 0 | 0 |
---
datasets:
- self-explanations
tags:
- adapter-transformers
- t5
- adapterhub:self-explanations
---
# Adapter `nbogdan/flant5-xl-2ex-paraphrasing-3epochs` for google/flan-t5-xl
An [adapter](https://adapterhub.ml) for the `google/flan-t5-xl` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library.
## Usage
First, install `adapter-transformers`:
```
pip install -U adapter-transformers
```
_Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_
Now, the adapter can be loaded and activated like this:
```python
from transformers import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("google/flan-t5-xl")
adapter_name = model.load_adapter("nbogdan/flant5-xl-2ex-paraphrasing-3epochs", source="hf", set_active=True)
```
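Once activated, the adapter can take part in ordinary seq2seq generation. The snippet below is a hedged usage sketch rather than the authors' recipe: the prompt, the generation settings, and the assumption that the loaded adapter provides a seq2seq head are all illustrative.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")

# Reuses `model` from the loading snippet above; assumes a seq2seq head is available.
inputs = tokenizer("Paraphrase: The meeting was postponed to next week.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```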
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here -->
|
[
"PARAPHRASING"
] |
Non_BioNLP
|
Gopal2002/setfit_zeon
|
Gopal2002
|
text-classification
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"model-index",
"region:us"
] | 1,704,195,328,000 | 2024-01-16T06:58:07 | 4 | 0 |
---
base_model: BAAI/bge-small-en-v1.5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: <s_cord-v2><s_menu><s_nm> HINALCO INDUSTRIES LTB. HIRAKUR</s_nm><s_unitprice>
1344</s_unitprice><s_cnt> 1</s_cnt><s_price> 4,436</s_price><sep/><s_nm> ASTRICA
BRIOC</s_nm><s_unitprice> 12.082</s_unitprice><s_cnt> 1</s_cnt><s_discountprice>
12.027</s_discountprice><s_price> SUSPICY TEMPURA HIRAKUR</s_nm><s_unitprice>
12.027.00.0020</s_discountprice><s_price> PAK SUSHI HIRAKURURUR</s_nm><s_unitprice>
12.027.00.0020</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> 12.027</s_discountprice><s_price>
4,436</s_price><sep/><s_nm> SUSHI SALT CALLOCALI</s_nm><s_unitprice> 12.027.0020</s_unitprice><s_cnt>
1</s_cnt><s_discountprice> 1,003</s_discountprice><s_price> 1,00</s_price></s_menu><s_sub_total><s_subtotal_price>
3,003</s_subtotal_price><s_discount_price> 3,003<sep/> 0.00</s_discount_price></s_sub_total><s_total><s_total_price>
3,00</s_total_price><s_cashprice> 3,00</s_cashprice><s_changeprice> 1,00</s_changeprice></s_total>
- text: <s_cord-v2><s_menu><s_nm> UNDALCO INDUSTRIES LIIMITED</s_nm><s_discountprice>
-*OQU<sep/><s_nm> PYCHE DESIGNCE PURCHASE ORDER</s_nm><sep/><s_nm> WHOCO SUSHINGGA
CHOCO SUSHINGGA CHOCO SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA SUSHINGGA
SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANG SUSHINGGANG
SUSHINGGANG SUSHINGGANG SUSHINGGANGHONG SUSHINGGANG SUSHINGGANGHONG SUSHINGGANGHONG
SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG
SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONG SUSHINGGANGHONGHONG
POWER</s_nm><s_price> SUSHINGGANGHONGHONGHONG POWER</s_nm><s_price> SUSHINGGANGHONGHONG
POWER</s_nm><s_price> SUSHINGGANGGANGGANGGANGGANGGANGGANGGANGGA SUSHINGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGA
- text: <s_cord-v2><s_menu><s_nm> TAX INVOLICE</s_nm><s_unitprice> 2310</s_unitprice><s_cnt>
2</s_cnt><s_price> A</s_price><sep/><s_nm> BLOOM Combustion India Putu</s_nm><s_unitprice>
150,000</s_unitprice><s_cnt> 2</s_cnt><s_discountprice> 1,040<sep/><s_nm> A.C.B.C.B.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C.C
- text: <s_cord-v2><s_menu><s_nm> HINA DLCO INDUSTRIES LIMITED</s_nm><s_price> SUSHIZE</s_price><sep/><s_nm>
PONE CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCOCO CHOCO CHOCOCO CHOCOCO CHOCO CHOCOCO
CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO
CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO
CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO
CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO
CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO CHOCOCO
CHOCO CHOCO CHOCOCO CHOCOCO CHOCO CHOCOCO CHOCOCO CHOCO CHOCOCO CHOCOCO CHOCO
CHOCOCO CHOCO CHOCOCO CHOCO CHOCOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO CHOCO
- text: '<s_cord-v2><s_menu><s_nm> HNDALCO INDUSTRIES LID. HIRAKUND POWER</s_nm><s_num>
ASH WITCH BRIOGE</s_nm><s_num> HPOM: 01-Hou DATE: 0001-social<sep/><s_nm> SAH</s_nm><s_num>
DAGE NUMBER : 1</s_etc><sep/><s_nm> SINO TAKING ODAYS OATE INTINE TAKE CROSS Wc
OLOAD SLOOPPERATOR</s_nm><s_num> JGGC</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JERCEA</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER<s_num>
JER</s_nm><s_num> JER<s_num> JER</s_nm><s_num> JER<s_num> JER</s_nm><s_num> JER<s_num>
JER</s_nm><s_num> JER<s_num> JER</s_nm><s_num> JER<s_num> JER</s_nm><s_num> JER</s_nm><s_num>
JER</s_num><s_price> 0.00</s_price><sep/><s_nm> ORANGA</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num> JER</s_nm><s_num>
JER</s_nm><s_num> JER</s_nm><s_num> JER</s_total>'
inference: true
model-index:
- name: SetFit with BAAI/bge-small-en-v1.5
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 1.0
name: Accuracy
---
# SetFit with BAAI/bge-small-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
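Because the two components stay separate, the pipeline can also be inspected step by step. Below is a minimal sketch that assumes the public `model_body` / `model_head` attributes of `SetFitModel`; the input string is a placeholder, not real data from this model's training set.

```python
import numpy as np
from setfit import SetFitModel

model = SetFitModel.from_pretrained("Gopal2002/setfit_zeon")

# Step 1: the Sentence Transformer body turns the text into a dense embedding.
embeddings = model.model_body.encode(
    ["<s_cord-v2><s_menu><s_nm> PLACEHOLDER TEXT</s_nm></s_menu>"])

# Step 2: the LogisticRegression head maps the embedding to a class label.
print(model.model_head.predict(np.asarray(embeddings)))
```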
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------|
| 2 | <ul><li>'<s_cord-v2><s_menu><s_nm> M/S. JOSEPH SUNA</s_nm><s_num> DATABI<sep/><s_nm> Tankipaa.MIRAKU SAMBALPUR.768Q16</s_nm><s_num> DMB Nb N.9861345883<sep/><s_nm> Deals Which :Altypes of ChickenGNALOR REGUMING)</s_nm><s_num> DISALI<sep/><s_nm> WINNALIZED</s_nm><s_num> CHOCO SUSPECIALIZE</s_nm><s_num> TWICENCHE<sep/><s_nm> SHRANGKANG POWER</s_nm><s_num> LATHOCO TWICENKO:</s_nm><s_num> JERYUNG CHOCO TWICENKO:</s_nm><s_num> JERYUNG HZYGANGKAN<sep/><s_nm> DIFF-SAWALAPUKU SAMBALPUR.76801GHOLIZEG DATE</s_nm><s_num> DATE</s_nm><s_num> DATE:</s_nm><s_num> 01/01/01/01/01/01/01/01/01/01/01/01/01/01/01/01/01<sep/><s_nm> PAN No.:</s_nm><s_num> PPODATE</s_nm><s_num> 01/01/01/01/01/01/01/01/01/01/01<sep/><s_nm> DATE OPSE<sep/><s_nm> HANDUPPOWER</s_nm><s_num> 30.12221</s_num><s_price> 1,945.00</s_price><sep/><s_nm> SUSPENGGANGURG.GUSTAGUR GUSTAGANGKANGURGUSTAGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTGUTG'</li><li>'<s_cord-v2><s_menu><s_nm> GST INVOLICE</s_nm><s_price> ORIGINAL FOR KEGINGLI</s_nm><s_price> WOUCE BREGRAMING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPINGLIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHIPPING SUSHI'</li><li>'<s_cord-v2><s_menu><s_nm> TAX INVOICE</s_nm><s_price> ORIGINAL FOR AQUALIZE</s_nm><s_price> SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO 
SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO SUSHIKO '</li></ul> |
| 1 | <ul><li>'<s_cord-v2><s_menu><s_nm> UNDALCO INDUSTRIES LTB.</s_nm><s_unitprice> HIRAKUD POWER</s_nm><sep/><s_nm> ASH WPTCH BRIOGE</s_nm><s_unitprice> TIMOL CATE BRIOUS DATE</s_nm><s_unitprice> SUSCEE</s_nm><s_unitprice> SUSCE</s_unitprice><s_cnt> 1</s_cnt><s_price> SUSCE</s_price><sep/><s_nm> MSCED</s_nm><s_unitprice> SUSCEE</s_nm><s_unitprice> SUSCE</s_unitprice><s_cnt> 1</s_cnt><s_price> SUSCE</s_price><sep/><s_nm> MICHI CHOCO KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KESE KE'</li><li>'<s_cord-v2><s_menu><s_nm> UNDALCO INDUSTRIES LTB.</s_nm><s_unitprice> HIR A KD POWER</s_nm></s_sub><sep/><s_nm> ASH WEICH BRIOGE</s_nm><s_unitprice> 16.36.36m2</s_unitprice><s_cnt> AGE IMPL CAST SUSIC :RING LETS SUSIC SUSIC SUSIC SUSIC SUSIC SUSIC SUSCCE</s_nm></s_sub><sep/><s_nm> MSCHO</s_nm><s_unitprice> 13.45</s_unitprice><s_cnt> 1.36.36</s_cnt><s_price> 6.36</s_price><sep/><s_nm> SUSPICY TEMPLE</s_nm><s_unitprice> 14.50.13.502</s_unitprice><s_cnt> 1.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREAT TRIPSE TO WBLE</s_nm><s_unitprice> 13.35.5cs</s_unitprice><s_cnt> 1.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTY TRIPSE</s_nm><s_unitprice> 13.50</s_unitprice><s_cnt> 1.00.00.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTY TRIPSE</s_nm><s_unitprice> 13.50.50</s_unitprice><s_cnt> 1.00.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTY TRIPSE</s_nm><s_unitprice> 13.50.50</s_unitprice><s_cnt> 1.00.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTY TRIPSE</s_nm><s_unitprice> 13.50.50</s_unitprice><s_cnt> 1.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREYA TEMPLE</s_nm><s_unitprice> 13.50</s_unitprice><s_cnt> 1.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREYA TEMPLE ITEMBLE<s_cnt> 1.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREYANG TEMPLE ITEMBLE<s_cnt> 1.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREYANG TEMPLE ITEMBLE<s_cnt> 1.00</s_cnt><s_price> 
0.00</s_price><sep/><s_nm> BREATTYPE 3.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATSUPER</s_nm><s_unitprice> 13.35.5cs</s_unitprice><s_cnt> 1.00</s_cnt><s_price> 5.940</s_price><sep/><s_nm> 0.00</s_price><sep/><s_nm> BRETYPETROPICPICPICPICYE</s_nm><s_unitprice> 13.50</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATTYPE 3.00.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATPICYEPIC ITEMBLE<s_cnt> 1.00</s_cnt><s_price> 0.00</s_price><sep/><s_nm> BREATSUPER</s_nm><s_unitprice> 13.50</s_cnt><s_price> 5.940.00</s_price></s_menu><s_sub_total><s_subtotal_price> 0.00</s_subtotal_price><s_tax_price> 13.50</s_tax_price></s_sub_total><s_total><s_total_price> 31.00</s_cnt><s_price> BK.00</s_total_price></s_total>'</li><li>'<s_cord-v2><s_menu><s_nm> ORI ZHDLE TOMI O JAPAN SUSHIKA JERYA CHARGE</s_nm><s_unitprice> @SAKASAKASAKA SPICKING SUSHI JER</s_nm><s_unitprice> @SAKASAKASAKASAKASAKA SPICKING SUSHI JER</s_nm><s_unitprice> @SAKASAKASAKASAKASAKA SPICKING SUSHI JER</s_nm><s_unitprice> @SAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKASAKAStakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakatta
kattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakattakat'</li></ul> |
| 0 | <ul><li>'<s_cord-v2><s_menu><s_nm> HANDALCO 이미지ES LIMITED</s_nm><s_price> SUNDAYGHOCO SUSHIZEH CINCEHANGKAGHOCO SUSHIZEHANGKAGHOUSHIZEHANGKAGHOUSHIZEHANGKAGHOUSHIZEHANGKAGHOUSHIZEHANGKAGHOUSHIZEHANGKANG PURCHASE ORDER</s_nm><sep/><s_nm> WANTE CHOCO CAKE CONSULATANCE PYI LOTHO NUMPIC UPICK CHOCO CHOCO CHOCOCO SUSHIZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHER</s_nm><s_discountprice>Nt.Minitie HGHOCEHINE</s_nm><s_discountprice>N.Minitie HGUMAGHO</s_nm><s_discountprice>N</s_nm><s_discountprice>N.Minitie HUMAGHO</s_nm><s_discountprice>N</s_nm><s_discountprice>N</s_discountprice><s_price> 436.0</s_price><sep/><s_nm> OxMini WHEN HUMAGHUNG</s_nm><s_discountprice> SUSHIZEHITEGHOUSHILIZEHENCE COTTING THOGEHGHOCO SUSHIZEHITEGHTGHOLIZEHGHOLIZEHGHOLIZEHGHOLIZEHGPICYGLIZEHGHTG SOUTING SUSHIZEHITEGHTGHOLIZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEHZEH'</li><li>'<s_cord-v2><s_menu><s_nm> WINGllaco Industries Limited</s_nm><s_unitprice> LIKING PICCE CHOCOLOGY VICE</s_nm><s_unitprice> LIKING SUSHIBILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILILI'</li><li>'<s_cord-v2><s_menu><s_nm> HINDALCO INDUSTRIES LIMITED</s_nm><s_price> GSTING&NAACHI201</s_price><sep/><s_nm> WBABUPOWER HEROGUSTAMPURGANGKANCE 
CHOCOLOGALINGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGANGGA'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 1.0 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Gopal2002/setfit_zeon")
# Run inference
preds = model("<s_cord-v2><s_menu><s_nm> HINALCO INDUSTRIES LTB. HIRAKUR</s_nm><s_unitprice> 1344</s_unitprice><s_cnt> 1</s_cnt><s_price> 4,436</s_price><sep/><s_nm> ASTRICA BRIOC</s_nm><s_unitprice> 12.082</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> 12.027</s_discountprice><s_price> SUSPICY TEMPURA HIRAKUR</s_nm><s_unitprice> 12.027.00.0020</s_discountprice><s_price> PAK SUSHI HIRAKURURUR</s_nm><s_unitprice> 12.027.00.0020</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> 12.027</s_discountprice><s_price> 4,436</s_price><sep/><s_nm> SUSHI SALT CALLOCALI</s_nm><s_unitprice> 12.027.0020</s_unitprice><s_cnt> 1</s_cnt><s_discountprice> 1,003</s_discountprice><s_price> 1,00</s_price></s_menu><s_sub_total><s_subtotal_price> 3,003</s_subtotal_price><s_discount_price> 3,003<sep/> 0.00</s_discount_price></s_sub_total><s_total><s_total_price> 3,00</s_total_price><s_cashprice> 3,00</s_cashprice><s_changeprice> 1,00</s_changeprice></s_total>")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:----|
| Word count | 5 | 107.8041 | 763 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 47 |
| 1 | 51 |
| 2 | 50 |
### Training Hyperparameters
- batch_size: (32, 32)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
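The values above translate fairly directly into the `setfit` training API. The following is a hedged sketch of how such a run could be configured — the training data shown here is a placeholder, since the actual labelled dataset for this model is not published:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder few-shot data; replace with the real labelled OCR outputs.
train_ds = Dataset.from_dict({
    "text": [
        "<s_cord-v2>... document A ...", "<s_cord-v2>... document B ...",
        "<s_cord-v2>... document C ...", "<s_cord-v2>... document D ...",
    ],
    "label": [0, 0, 1, 1],
})

model = SetFitModel.from_pretrained("BAAI/bge-small-en-v1.5")
args = TrainingArguments(
    batch_size=(32, 32),
    num_epochs=(2, 2),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```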
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0022 | 1 | 0.3004 | - |
| 0.1094 | 50 | 0.2457 | - |
| 0.2188 | 100 | 0.1464 | - |
| 0.3282 | 150 | 0.0079 | - |
| 0.4376 | 200 | 0.0028 | - |
| 0.5470 | 250 | 0.0027 | - |
| 0.6565 | 300 | 0.0017 | - |
| 0.7659 | 350 | 0.0014 | - |
| 0.8753 | 400 | 0.0015 | - |
| 0.9847 | 450 | 0.0011 | - |
| 1.0941 | 500 | 0.001 | - |
| 1.2035 | 550 | 0.0011 | - |
| 1.3129 | 600 | 0.001 | - |
| 1.4223 | 650 | 0.0011 | - |
| 1.5317 | 700 | 0.0011 | - |
| 1.6411 | 750 | 0.0009 | - |
| 1.7505 | 800 | 0.0008 | - |
| 1.8600 | 850 | 0.001 | - |
| 1.9694 | 900 | 0.0009 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.2
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
jwhong2006/wikisum
|
jwhong2006
|
summarization
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"en",
"dataset:d0rj/wikisum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,717,690,166,000 | 2024-06-06T16:30:30 | 28 | 0 |
---
base_model: t5-small
datasets:
- d0rj/wikisum
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- rouge
pipeline_tag: summarization
tags:
- generated_from_trainer
widget:
- text: 'Do not shuck or wash your oysters. Oysters taste best when you shuck them
immediately before eating them. In addition, keeping oysters in their shells makes
them easier to store and reduces the chance that they''ll go bad. If your oysters
came pre-shucked in a plastic container, store them in the freezer until you''re
ready to use them. Leave the grit and dirt on the oysters. This will keep them
moist and will help to insulate the meat. Pour ice into a small bowl or other
open-top container. Grab a bowl, small cooler, or similar container that you can
place inside your fridge. Make sure this container has an open top or removable
lid. Then, pour a layer of ice into the bottom of the container. Do not keep your
oysters in a sealed or closed-top container. Doing so will suffocate them. You
may need to change your ice during the refrigeration process, so do not pour any
into the container if you won''t be able to check your oysters regularly. Place
your oysters on top of the ice bed deep side down. Just like seafood merchants,
you''ll be storing your oysters on ice to keep them as chilled and fresh as possible.
Make sure to turn each of your oysters so that the deeper side faces down, a technique
that will help them better retain their juices. Dampen a towel with cold water
and place it on top of the oysters. Dip a thin, clean kitchen towel in cold water
and ring out the excess liquid. Then, gently lay the towel on top of the oysters.
This will keep the oysters from drying out while preventing fresh water poisoning.
If you''d prefer, you can cover the oysters with damp paper towels or newspaper
instead. Oysters are salt water creatures, so submerging them in fresh water will
essentially poison them and lead to their death. Place your container in a refrigerator.
If possible, set your refrigerator to a temperature between 35 and 40 °F (2 and
4 °C). Make sure to store your oysters above any raw meat so the juices don''t
drip down onto your shellfish. If possible, check on your oysters at least once
a day while they''re in the fridge. If the towel dries out, dampen it again. If
the ice in your container melts, pour it out and replace it with new ice. Keep
your oysters in the fridge for up to 2 days. For safety, remove and consume your
oysters within about 2 days of initially storing them. Though some oysters may
last for a week or longer, eating them that late puts you at greater risk of food
poisoning and other unwanted ailments. If your oysters came with an expiration
date, use that as your guide for maximum storage time. Freeze your oysters if
you need to store them for more than 2 days. Shuck the oysters when you’re ready
to eat them. Once you finish storing the oysters, run them under cool water and
open their shells. Then, run a knife under the flat side of the oyster and pop
the shell off. Before eating, carefully separate the oyster from the rest of the
shell using a knife. Before eating an oyster, inspect it to make sure it is still
good. If the shell appears to be damaged, if the oyster smells foul, or if the
meat is a cloudy shade of grey, brown, black, or pink, throw the oyster away.
Keep the oysters in their shells and rinse them off. Storing your oysters inside
their shells will make them less likely to go bad and, in some cases, better preserve
their taste. Unlike refrigerating oysters, rinsing the shells under cold water
to clean them off prevents any bacteria from living on the oysters. If you don''t
have enough room in your freezer to keep full-shelled oysters, you can shuck them
before storage. If you do so, save the internal liquor for later use. Place your
oysters in a freezer-safe container. To keep your oysters safe, place them inside
a moisture-resistant, freezer-safe bag. If you''re storing shucked oysters, you
can use a firm plastic container instead. To prevent freezer burns, leave no more
than 0.5 in (1.3 cm) of head space in the container. Pour oyster liquor into the
container if you’re freezing shucked oysters. To help your shucked oysters retain
their juiciness, pour the liquor you removed during the shucking process into
your freezer-safe container. Keep pouring until you''ve completely submerged the
oysters inside the liquid. If you don''t have enough liquor to fill the container,
pour in water as well. Seal the container. If you''re using a resealable bag,
press any excess air out of it using your fingers. Then, seal your container right
before you put it into the freezer. Unlike with refrigerated oysters, closing
the container will help better preserve your shellfish during long-term storage.
If you''re using a solid plastic container, make sure the lid you seal it with
is air-tight. Make sure to write the initial storage date on your container. Keep
your oysters in the freezer for up to 3 months. When frozen properly, fresh oysters
should last for between 2 and 3 months. To make sure your oysters aren''t going
bad, look over them regularly and remove any that have cracked shells or cloudy
meat that is a pink, black, brown, or grey color. While your oysters may remain
safe to eat during this time, the taste will degrade gradually. Thaw your oysters
in the fridge before consuming. Carefully take your oyster container out of the
freezer and place it in a clear, open part of your refrigerator. Depending on
the exact temperature of your appliances, the thawing process could take up to
20 hours to complete. Thawing your oysters using this method gives them a slightly
longer shelf life, meaning you don''t have to use them immediately after they
thaw. If you''d like, you can thaw your oysters by submerging their container
in cold water. However, you''ll have to consume them immediately after they thaw,
otherwise they''ll go bad. '
model-index:
- name: wikisum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wikisum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the [wikisum](https://huggingface.co/datasets/d0rj/wikisum) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2922
- Rouge1: 0.1811
- Rouge2: 0.0673
- Rougel: 0.147
- Rougelsum: 0.147
- Gen Len: 19.0
## Model description
A t5-small model fine-tuned on the wikisum dataset.
## Intended uses & limitations
Intended use: summarization of informational articles.
Limitations: the model may generate misleading or inaccurate information.
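For example, a minimal inference sketch (assuming the checkpoint loads with the standard `summarization` pipeline; the generation settings below are illustrative, not official values):
```python
from transformers import pipeline

# Minimal usage sketch for the fine-tuned checkpoint.
summarizer = pipeline("summarization", model="jwhong2006/wikisum")
article = "Do not shuck or wash your oysters. Oysters taste best when you shuck them immediately before eating them."
print(summarizer(article, max_length=48, min_length=8, do_sample=False)[0]["summary_text"])
```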
## Training and evaluation data
Check out the [wikisum](https://huggingface.co/datasets/d0rj/wikisum) dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.5807 | 0.2236 | 500 | 2.3647 | 0.1813 | 0.0635 | 0.1452 | 0.1453 | 19.0 |
| 2.5059 | 0.4472 | 1000 | 2.3190 | 0.1823 | 0.0663 | 0.1473 | 0.1473 | 19.0 |
| 2.4945 | 0.6708 | 1500 | 2.3003 | 0.1808 | 0.0666 | 0.1468 | 0.1467 | 19.0 |
| 2.4963 | 0.8945 | 2000 | 2.2922 | 0.1811 | 0.0673 | 0.147 | 0.147 | 19.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
SZTAKI-HLT/Bert2Bert-HunSum-1
|
SZTAKI-HLT
|
text2text-generation
|
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"hubert",
"bert",
"summarization",
"hu",
"dataset:SZTAKI-HLT/HunSum-1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,673,087,758,000 | 2023-01-24T16:21:16 | 137 | 2 |
---
datasets:
- SZTAKI-HLT/HunSum-1
language:
- hu
metrics:
- rouge
pipeline_tag: text2text-generation
tags:
- hubert
- bert
- summarization
inference:
parameters:
num_beams: 5
length_penalty: 2
max_length: 128
no_repeat_ngram_size: 3
early_stopping: true
---
# Model Card for Bert2Bert-HunSum-1
The Bert2Bert-HunSum-1 is a Hungarian abstractive summarization model, which was trained on the [SZTAKI-HLT/HunSum-1 dataset](https://huggingface.co/datasets/SZTAKI-HLT/HunSum-1).
The model is based on [SZTAKI-HLT/hubert-base-cc](https://huggingface.co/SZTAKI-HLT/hubert-base-cc).
## Intended uses & limitations
- **Model type:** Text Summarization
- **Language(s) (NLP):** Hungarian
- **Resource(s) for more information:**
- [GitHub Repo](https://github.com/dorinapetra/summarization)
## Parameters
- **Batch Size:** 13
- **Learning Rate:** 5e-5
- **Weight Decay:** 0.01
- **Warmup Steps:** 16000
- **Epochs:** 15
- **no_repeat_ngram_size:** 3
- **num_beams:** 5
- **early_stopping:** True
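For reference, decoding with these parameters might look like the following minimal sketch (it assumes the repository ships a tokenizer loadable with `AutoTokenizer`; the article text is a placeholder):
```python
from transformers import AutoTokenizer, EncoderDecoderModel

tokenizer = AutoTokenizer.from_pretrained("SZTAKI-HLT/Bert2Bert-HunSum-1")
model = EncoderDecoderModel.from_pretrained("SZTAKI-HLT/Bert2Bert-HunSum-1")

article = "Ide kerül az összefoglalandó magyar nyelvű cikk szövege."  # placeholder Hungarian article
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(
    inputs.input_ids,
    num_beams=5,
    no_repeat_ngram_size=3,
    length_penalty=2.0,
    max_length=128,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```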
## Results
| Metric | Value |
| :------------ | :------------------------------------------ |
| ROUGE-1 | 28.52 |
| ROUGE-2 | 10.35 |
| ROUGE-L | 20.07 |
## Citation
If you use our model, please cite the following paper:
```
@inproceedings {HunSum-1,
title = {{HunSum-1: an Abstractive Summarization Dataset for Hungarian}},
booktitle = {XIX. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2023)},
year = {2023},
publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
address = {Szeged, Magyarország},
author = {Barta, Botond and Lakatos, Dorina and Nagy, Attila and Nyist, Mil{\'{a}}n Konor and {\'{A}}cs, Judit},
pages = {231--243}
}
```
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
iryneko571/mt5-translation-ja_zh-game-small
|
iryneko571
|
translation
|
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"translation",
"ja",
"zh",
"dataset:ayymen/Pontoon-Translations",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,707,147,162,000 | 2024-07-04T10:41:17 | 23 | 0 |
---
datasets:
- ayymen/Pontoon-Translations
language:
- ja
- zh
license: mit
pipeline_tag: translation
widget:
- text: <-ja2zh-> フェルディナント・ラッサール \n は、プロイセンの政治学者、哲学者、法学者、社会主義者、労働運動指導者。ドイツ社会民主党の母体となる全ドイツ労働者同盟の創設者である。社会主義共和政の統一ドイツを目指しつつも、……
inference:
parameters:
repetition_penalty: 1.4
---
# new model:
iryneko571/mt5-small-translation-ja_zh<br>
better in most aspects and closer to a base model, trained on purer data (numerically stronger)<br>
includes a preconfigured Colab notebook, so you can test translation directly without installing anything<br>
# Release Notes
* this model is fine-tuned from mt5-small
* it uses about 1.5 GB of VRAM; fp16 loading needs less than 1 GB (if the batch size is small), and CPU inference speed is acceptable
* trained on a trimmed subset of the Pontoon dataset, keeping the ja-to-zh translation pairs
* also mixed in a large, noisy batch of translations produced by mt5-translation-ja_zh-game-v0.1 as additional training data
* reasons for making this model<br>
testing the idea of using the Pontoon dataset<br>
building a flexible translation evaluation standard, which needs a low-performing model as a point of comparison
# Model release statement
* this model was obtained by continued training from mt5-translation-ja_zh
* it uses more than 1.5 GB of VRAM; fp16 loading stays under 1 GB (raising the batch size will exceed 1 GB), and CPU inference speed is acceptable
* reason for making this model<br>
experimenting with fine-tuning an existing model; small models train remarkably fast<br>
* known weaknesses of this model<br>
it exists mainly for testing; although its VRAM usage is very low, its translation quality is very poor<br>
# Simple backend application
Not yet stably debugged; use with caution.
* https://github.com/IryNeko/RabbitCafe
# Usage guide: a more complete example
```python
from transformers import pipeline

model_name = "iryneko571/mt5-translation-ja_zh-game-small"
pipe = pipeline(
    "translation",
    model=model_name,
    repetition_penalty=1.4,
    batch_size=1,
    max_length=256,
)

def translate_batch(batch, language="<-ja2zh->"):
    """Translate a list of Japanese strings into Chinese."""
    # Prefix each input with the language tag the model expects.
    prompts = [f"{language} {text}" for text in batch]
    translated = pipe(prompts)
    return [item["translation_text"] for item in translated]

inputs = []  # replace with your own list of Japanese strings
print(translate_batch(inputs))
```
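As mentioned in the release notes above, memory use can be reduced by loading the pipeline in fp16. A sketch (assuming a CUDA GPU is available; on CPU, keep the default dtype):
```python
import torch
from transformers import pipeline

pipe_fp16 = pipeline(
    "translation",
    model="iryneko571/mt5-translation-ja_zh-game-small",
    torch_dtype=torch.float16,  # halves weight memory; intended for GPU use
    device=0,
    repetition_penalty=1.4,
    max_length=256,
)
print(pipe_fp16("<-ja2zh-> こんにちは、世界")[0]["translation_text"])
```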
# Roadmap
* Scramble more translation results from gpt4o, gpt3.5, claude, mt5 and other sources to build a messier, more varied training input
* increase translation accuracy
* apply LoRA and int8 inference to further decrease hardware requirements
* create ONNX and NCNN models
# How to find the author
Discord server:<br>
https://discord.gg/JmjPmJjA<br>
Join the channel if you need help, want a test server to try the latest version, or just want to chat.<br>
|
[
"TRANSLATION"
] |
Non_BioNLP
|
aandyluna/mt5-small-finetuned-amazon-en-es
|
aandyluna
|
summarization
|
[
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,733,285,254,000 | 2024-12-04T05:55:17 | 42 | 0 |
---
base_model: google/mt5-small
library_name: transformers
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0193
- Rouge1: 17.0896
- Rouge2: 8.362
- Rougel: 16.735
- Rougelsum: 16.8131
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.6768 | 1.0 | 1209 | 3.2182 | 17.7059 | 9.3629 | 17.1633 | 17.2774 |
| 3.6447 | 2.0 | 2418 | 3.1029 | 17.4241 | 8.8479 | 16.9706 | 16.9578 |
| 3.4304 | 3.0 | 3627 | 3.0759 | 15.8371 | 7.5702 | 15.2312 | 15.3302 |
| 3.3128 | 4.0 | 4836 | 3.0706 | 16.9745 | 8.7666 | 16.559 | 16.6638 |
| 3.2203 | 5.0 | 6045 | 3.0339 | 16.3788 | 7.769 | 15.9624 | 16.027 |
| 3.1651 | 6.0 | 7254 | 3.0283 | 16.4083 | 8.0507 | 15.9778 | 16.1114 |
| 3.1387 | 7.0 | 8463 | 3.0188 | 16.6289 | 8.2229 | 16.3528 | 16.3952 |
| 3.1139 | 8.0 | 9672 | 3.0193 | 17.0896 | 8.362 | 16.735 | 16.8131 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
ocm/distilbert-base-uncased-finetuned-emotion
|
ocm
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,667,042,147,000 | 2022-11-05T17:45:19 | 10 | 0 |
---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- type: accuracy
value: 0.935
name: Accuracy
- type: f1
value: 0.9351083637430424
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1582
- Accuracy: 0.935
- F1: 0.9351
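A minimal inference sketch (assuming the checkpoint loads with the standard `text-classification` pipeline; labels follow the emotion dataset):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ocm/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't believe how happy this makes me!"))
```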
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7703 | 1.0 | 250 | 0.2588 | 0.918 | 0.9165 |
| 0.2031 | 2.0 | 500 | 0.1773 | 0.928 | 0.9282 |
| 0.1385 | 3.0 | 750 | 0.1593 | 0.934 | 0.9342 |
| 0.1101 | 4.0 | 1000 | 0.1582 | 0.935 | 0.9351 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
Netta1994/setfit_baai_wix_qa_gpt-4o_improved-cot_chat_few_shot_remove_final_evaluation_e1_one_o
|
Netta1994
|
text-classification
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"model-index",
"region:us"
] | 1,727,096,564,000 | 2024-09-23T13:03:20 | 7 | 0 |
---
base_model: BAAI/bge-base-en-v1.5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'Reasoning:
The answer is detailed, specific, and accurately reflects the information provided
in the document. It directly addresses the steps necessary to change the reservation
reference from the service page to the booking calendar.
Evaluation:'
- text: 'Reasoning:
The provided answer describes the process of blocking off time in the calendar
to prevent customers from booking slots during those times. However, the question
specifically asks about removing the time from showing on the booking button,
not just blocking off time. The answer does not address the correct query and
misinterprets the request.
Evaluation:'
- text: 'Reasoning:
The provided answer is broadly accurate but lacks the direct mention of the error
message "You do not have access to Email," which is present in the document. It
also misses the context provided in the document to directly address the user''s
question, and didn''t include verifying the domain as part of enabling the calendar
scheduling and recording.
Evaluation:'
- text: 'Reasoning:
The provided answer is clear and instructive, reflecting the instructions in the
document precisely. It includes all necessary steps, matches the information from
the document, and even addresses prerequisites like having a premium plan and
a connected domain.
Evaluation:'
- text: 'Reasoning:
The answer provided here is accurate and aligns well with the details found in
the document. It outlines all the necessary steps and prerequisites, such as upgrading
to a business & ecommerce premium plan, and correctly explains the process within
the context of Editor X. It also offers relevant additional information about
the visibility of service list pages and member pages, ensuringa comprehensive
response.
Final Evaluation:'
inference: true
model-index:
- name: SetFit with BAAI/bge-base-en-v1.5
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.7291666666666666
name: Accuracy
---
# SetFit with BAAI/bge-base-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'Reasoning:\nThe answer directly contradicts the correct storing methods detailed in the document. The answer contains advice that could damage jewelry, such as storing it in high humidity areas and keeping diamonds together.\n\nEvaluation:'</li><li>'Reasoning:\nContradiction - The document clearly states that Chopin met Felix Mendelssohn at the music festival in 1834, not Ludwig van Beethoven.\n\nEvaluation:'</li><li>'Reasoning:\nincomplete - The answer is not relevant to what is being asked, it provides information unrelated to the Angel & Faith Season Ten comic book series.\nEvaluation:'</li></ul> |
| 1 | <ul><li>'Reasoning:\nThe answer efficiently captures the main character from the book "Chase In Shadow (Johnnies #1)" and accurately describes the dual aspects of his life, with information directly supported by the document.\nEvaluation:'</li><li>'Reasoning:\nfactual error - The answer includes a factual error that directly contradicts the information available in the document.\nEvaluation:'</li><li>'Reasoning:\nThe answer correctly identifies the main statement of the Equal Rights Amendment and aligns with the content provided in the document.\n\nEvaluation:'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.7292 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_wix_qa_gpt-4o_improved-cot_chat_few_shot_remove_final_evaluation_e1_one_o")
# Run inference
preds = model("Reasoning:
The answer is detailed, specific, and accurately reflects the information provided in the document. It directly addresses the steps necessary to change the reservation reference from the service page to the booking calendar.
Evaluation:")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 37.6205 | 156 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 304 |
| 1 | 339 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0006 | 1 | 0.2284 | - |
| 0.0311 | 50 | 0.2525 | - |
| 0.0622 | 100 | 0.2453 | - |
| 0.0933 | 150 | 0.2317 | - |
| 0.1244 | 200 | 0.2263 | - |
| 0.1555 | 250 | 0.2167 | - |
| 0.1866 | 300 | 0.1779 | - |
| 0.2177 | 350 | 0.1659 | - |
| 0.2488 | 400 | 0.1149 | - |
| 0.2799 | 450 | 0.0699 | - |
| 0.3109 | 500 | 0.0595 | - |
| 0.3420 | 550 | 0.0472 | - |
| 0.3731 | 600 | 0.0429 | - |
| 0.4042 | 650 | 0.0343 | - |
| 0.4353 | 700 | 0.0242 | - |
| 0.4664 | 750 | 0.0201 | - |
| 0.4975 | 800 | 0.0137 | - |
| 0.5286 | 850 | 0.0123 | - |
| 0.5597 | 900 | 0.0148 | - |
| 0.5908 | 950 | 0.0119 | - |
| 0.6219 | 1000 | 0.011 | - |
| 0.6530 | 1050 | 0.0129 | - |
| 0.6841 | 1100 | 0.0108 | - |
| 0.7152 | 1150 | 0.0082 | - |
| 0.7463 | 1200 | 0.0131 | - |
| 0.7774 | 1250 | 0.0105 | - |
| 0.8085 | 1300 | 0.0087 | - |
| 0.8396 | 1350 | 0.0097 | - |
| 0.8706 | 1400 | 0.011 | - |
| 0.9017 | 1450 | 0.0056 | - |
| 0.9328 | 1500 | 0.0109 | - |
| 0.9639 | 1550 | 0.0076 | - |
| 0.9950 | 1600 | 0.009 | - |
### Framework Versions
- Python: 3.10.14
- SetFit: 1.1.0
- Sentence Transformers: 3.1.1
- Transformers: 4.44.0
- PyTorch: 2.4.0+cu121
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
tamarab/bert-emotion
|
tamarab
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,653,065,112,000 | 2022-05-20T19:12:14 | 116 | 0 |
---
datasets:
- tweet_eval
license: apache-2.0
metrics:
- precision
- recall
tags:
- generated_from_trainer
model-index:
- name: bert-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
args: emotion
metrics:
- type: precision
value: 0.7462955517135084
name: Precision
- type: recall
value: 0.7095634380533169
name: Recall
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-emotion
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1347
- Precision: 0.7463
- Recall: 0.7096
- Fscore: 0.7209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8385 | 1.0 | 815 | 0.8366 | 0.7865 | 0.5968 | 0.6014 |
| 0.5451 | 2.0 | 1630 | 0.9301 | 0.7301 | 0.6826 | 0.6947 |
| 0.2447 | 3.0 | 2445 | 1.1347 | 0.7463 | 0.7096 | 0.7209 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
Netta1994/setfit_baai_gpt-4o_cot-few_shot_remove_final_evaluation_e1_one_big_model_1727080822.0
|
Netta1994
|
text-classification
|
[
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"region:us"
] | 1,727,080,822,000 | 2024-09-23T08:40:53 | 7 | 0 |
---
base_model: BAAI/bge-base-en-v1.5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'The provided answer is overall accurate, complete, and relevant to the query
about performing a male manicure. The steps, including soaking hands, scrubbing
nails, clipping nails, applying cuticle remover, pushing back cuticles, smoothing
edges with a file, and moisturizing, are all appropriately mentioned with detailed
instructions. The answer aligns well with the informationprovided in the document.
Final evaluation:'
- text: 'The answer provided discusses Kieron Freeman and his time with Notts County,
specifically mentioning that Martin Allen signed him when he went on loan there.
However, the question is about Aaron Pryor''s manager during his boxing career,
which is completely unrelated to the context provided in the answer and the document.
Final evaluation:'
- text: 'The provided answer states that "The concern regarding the usage of online
casinos is the risk of user data being compromised." However, this response is
irrelevant to the question asking about the concern of the husband of the person
who wrote the message on July 10, 2011, which completely mismatches the context
provided in the document.
Considering that the evaluation focuses on the accuracy and relevance of the provided
answer based on the provided question and document:
The final evaluation:'
- text: 'Evaluation:
The answer provided is completely unrelated to the question asked about painting
countertops. The answer discusses how to meet a crush for the first time, which
is not relevant to painting countertops.
Final evaluation:'
- text: 'The answer provided accurately states that Allan Cox''s First Class Delivery
was launched on a H128-10W for his Level 1 certification flight. This information
is directly retrieved from the document.
The final evaluation:'
inference: true
---
# SetFit with BAAI/bge-base-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>"The answer incorporates several elements not mentioned in the provided document, specifically the references to a virtual reality training technique and its impact on player decision-making. These aspects are not mentioned in the document, rendering the information inaccurate.\n\nIn the actual document, the offensive outburst of the Nuggets is attributed to coach Brian Shaw's strategy of encouraging players to take the first available shot in the rhythm of the offense and push the ball after makes and misses. The comfort and effectiveness in these strategies coming together are cited as reasons for the increased scoring.\n\nTherefore, the provided answer is flawed due to the inclusionof fabricated details.\n\nThe final evaluation:"</li><li>'The answer provided contains several inaccuracies and fabrications that do not align with the content of the document.\n\n1. **Film Under-Exposure Statement**: The answer erroneously states that "film under-exposes better than a digital sensor," whereas the document clearly mentions that "film over-exposes better than a digital sensor."\n\n2. **Color Compression Errors**: The answer claims film compresses exposure range into the "bottom end" and colors saturate to black, but the document specifies it compresses into the "top end" and colors desaturate to white.\n\n3. **Sensor Details**: The answer inaccurately mentions that digital sensors capture all three colors at each point when in reality it is stated that "Film also captures all three colors at every point. Digital sensors (all but Fovian, anyway) capture only one color at each point and then interpolate between them."\n\n4. **Megapixel Comparison**: The claim that the author finds "5MP digital sensors of today to be about comparable to high-end, professional film" is incorrect. The document actually compares "10MP digital sensors of today" to common, non-professional film for resolution.\n\nGiven these significant discrepancies and inaccuracies, the answer provided is unreliable and does not accurately reflect the document\'s content.\n\nThe final evaluation:'</li><li>'The provided answer addresses an entirely different topic—providing details about fighters and outcomes from a mixed martial arts event rather than discussing the main conflict in the third book of the Arcana Chronicles by Kresley Cole. The answer did not address the question at all. \n\nFinal evaluation:'</li></ul> |
| 1 | <ul><li>"The answer provided addresses the key elements that align with the best practices outlined in the document:\n\n1. **Getting to Know the Client**: The answer mentions understanding the client's needs, wants, and goals before starting the web design process, which is directly echoed in the document.\n\n2. **Signing a Contract**: The answer highlights the importance of having a detailed contract that outlines the scope of the project, costs, and how future revisions will be managed. This ensures that there are clear parameters and a point of reference if excessive requests arise.\n\n3. **Honesty and Diplomacy**: The answer advises showcasing a sense of honesty and diplomacy, particularly when extra charges are necessary or when certain requests are unfeasible. This aligns with the document's advice on effective communication and managing client expectations diplomatically.\n\nOverall, the answer aligns well with the recommendations provided in the document.\n\nThe final evaluation:"</li><li>"The answer provided is accurate and aligns well with the content of the document. The document discusses the importance of drawing on an author's own emotional experiences, particularly pain and emotion, to create genuine and relatable characters. This approach helps forge a connection between the reader and the characters.\n\nFinal evaluation:"</li><li>'The answer is directly substantiated by the document. It clearly mentions that Mauro Rubin, the CEO of JoinPad, was present at the event at Talent Garden Calabiana, Milan. The answer is concise and provides the exact information asked in the question without any extraneous details. \n\nFinal evaluation:'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_baai_gpt-4o_cot-few_shot_remove_final_evaluation_e1_one_big_model_1727080822.0")
# Run inference
preds = model("The answer provided accurately states that Allan Cox's First Class Delivery was launched on a H128-10W for his Level 1 certification flight. This information is directly retrieved from the document.
The final evaluation:")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 12 | 75.0147 | 301 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 199 |
| 1 | 209 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0010 | 1 | 0.2249 | - |
| 0.0490 | 50 | 0.2456 | - |
| 0.0980 | 100 | 0.1748 | - |
| 0.1471 | 150 | 0.0861 | - |
| 0.1961 | 200 | 0.051 | - |
| 0.2451 | 250 | 0.0613 | - |
| 0.2941 | 300 | 0.0325 | - |
| 0.3431 | 350 | 0.0128 | - |
| 0.3922 | 400 | 0.0075 | - |
| 0.4412 | 450 | 0.007 | - |
| 0.4902 | 500 | 0.004 | - |
| 0.5392 | 550 | 0.0027 | - |
| 0.5882 | 600 | 0.0023 | - |
| 0.6373 | 650 | 0.0019 | - |
| 0.6863 | 700 | 0.0018 | - |
| 0.7353 | 750 | 0.0017 | - |
| 0.7843 | 800 | 0.0017 | - |
| 0.8333 | 850 | 0.0016 | - |
| 0.8824 | 900 | 0.0016 | - |
| 0.9314 | 950 | 0.0015 | - |
| 0.9804 | 1000 | 0.0014 | - |
### Framework Versions
- Python: 3.10.14
- SetFit: 1.1.0
- Sentence Transformers: 3.1.1
- Transformers: 4.44.0
- PyTorch: 2.4.0+cu121
- Datasets: 3.0.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
pkaustubh4/QnA_BERT
|
pkaustubh4
|
question-answering
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] | 1,692,193,084,000 | 2023-08-16T20:43:31 | 10 | 0 |
---
datasets:
- squad
language:
- en
license: mit
---
# Question Answering with DistilBERT README
This repository contains code to train a Question Answering model using the DistilBERT architecture on the SQuAD (Stanford Question Answering Dataset) dataset. The model is trained to answer questions based on a given context paragraph. The training process utilizes PyTorch, the Hugging Face transformers library, and the datasets library.
## Prerequisites
Before running the code, make sure you have the following installed:
- NVIDIA GPU (for faster training; optional but recommended)
- NVIDIA CUDA Toolkit (if using a GPU)
- Python 3.x
- Jupyter Notebook or another Python environment
## Installation
You can set up your environment by running the following commands:
```bash
!nvidia-smi # Check GPU availability
!pip install -q transformers datasets torch tqdm
```
## Usage
- Loading and Preprocessing Data: The code loads the SQuAD dataset and selects a subset for training. You can adjust the subset_size variable to control the size of the subset.
- Tokenization and Dataset Creation: The QADataset class is defined to preprocess and tokenize the data for training. It converts question and context pairs into a tokenized format suitable for DistilBERT input, and it prepares the start and end positions of the answers in the context (see the sketch after this list).
- Model Configuration: The model is based on the DistilBERT architecture, specifically the "distilbert-base-cased" version.
- Training Loop: The code sets up a training loop for a specified number of epochs. It trains the model to predict the start and end positions of the answer span in the context paragraph.
- Saving the Model: The final trained model is saved to a specified directory in Google Drive. You can adjust the final_model_output_dir variable to change the save location.
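The core of this recipe — tokenizing question/context pairs with answer start and end positions, then running a manual training loop — might look roughly like the sketch below. Names such as `QADataset` and `subset_size` follow the README, but the details are a reconstruction and may differ from the original notebook.
```python
import torch
from torch.utils.data import Dataset, DataLoader
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

subset_size = 1000  # assumption: a small subset for a quick run
squad = load_dataset("squad", split="train").select(range(subset_size))
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")

class QADataset(Dataset):
    """Tokenizes question/context pairs and derives answer start/end token positions."""

    def __init__(self, examples):
        self.examples = examples

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        ex = self.examples[idx]
        enc = tokenizer(
            ex["question"], ex["context"],
            truncation="only_second", padding="max_length", max_length=384,
            return_offsets_mapping=True, return_tensors="pt",
        )
        # Convert the answer's character span into token indices within the context.
        start_char = ex["answers"]["answer_start"][0]
        end_char = start_char + len(ex["answers"]["text"][0])
        offsets = enc.pop("offset_mapping")[0].tolist()
        seq_ids = enc.sequence_ids(0)
        start_pos = end_pos = 0
        for i, (s, e) in enumerate(offsets):
            if seq_ids[i] != 1:
                continue  # skip question and special tokens
            if s <= start_char < e:
                start_pos = i
            if s < end_char <= e:
                end_pos = i
        item = {k: v.squeeze(0) for k, v in enc.items()}
        item["start_positions"] = torch.tensor(start_pos)
        item["end_positions"] = torch.tensor(end_pos)
        return item

model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-cased")
loader = DataLoader(QADataset(squad), batch_size=8, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

model.train()
for batch in loader:
    outputs = model(**batch)  # loss is computed from the start/end positions
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```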
## Training
To train the model, follow these steps:
- Run the provided code cells in a Jupyter Notebook or Python environment.
- The code will load the dataset, tokenize it, and set up the training loop.
- The model's training progress will be displayed using a progress bar.
- After training completes, the final trained model will be saved to the specified directory in Google Drive.
## Notes
- This code assumes you are using Google Colab to access the Google Drive API for saving the model. If you're using a different environment, you might need to adjust the saving mechanism.
- Make sure you have sufficient space in your Google Drive to save the model.
- You can modify hyperparameters such as batch size, learning rate, and the number of epochs to experiment with different training settings.
## Credits
- The code in this repository is based on the Hugging Face Transformers library and the SQuAD dataset.
- [DistilBERT](https://huggingface.co/transformers/model_doc/distilbert.html)
- [SQuAD dataset](https://rajpurkar.github.io/SQuAD-explorer/)
## License
This code is provided under the MIT License. Feel free to modify and use it as needed.
|
[
"QUESTION_ANSWERING"
] |
Non_BioNLP
|
fcogidi/pegasus-arxiv
|
fcogidi
|
summarization
|
[
"transformers.js",
"onnx",
"pegasus",
"text2text-generation",
"summarization",
"en",
"region:us"
] | 1,733,007,079,000 | 2024-12-01T00:20:43 | 18 | 0 |
---
language:
- en
library_name: transformers.js
pipeline_tag: summarization
---
https://huggingface.co/google/pegasus-arxiv with ONNX weights compatible with Transformers.js.
**NOTE**: As of 2024-11-30 Transformers.js does not support `PegasusTokenizer`.
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
wildgrape14/distilbert-base-uncased-finetuned-emotion
|
wildgrape14
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,691,668,660,000 | 2023-08-10T11:57:57 | 8 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.925
name: Accuracy
- type: f1
value: 0.9249069634242804
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2187
- Accuracy: 0.925
- F1: 0.9249
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8142 | 1.0 | 250 | 0.3171 | 0.9095 | 0.9082 |
| 0.2524 | 2.0 | 500 | 0.2187 | 0.925 | 0.9249 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.4
- Tokenizers 0.13.3
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
cvapict/yhi-message-type-all-MiniLM-L6-v2
|
cvapict
|
text-classification
|
[
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,693,396,086,000 | 2023-08-30T11:48:43 | 8 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# cvapict/yhi-message-type-all-MiniLM-L6-v2
Evaluation result: `{'accuracy': 0.8048780487804879}`
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
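For illustration, the two-step recipe above could be reproduced on new data roughly as follows (a sketch with placeholder texts and labels, not the original training setup; the base Sentence Transformer is assumed to be `sentence-transformers/all-MiniLM-L6-v2`):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer  # Trainer is the setfit >= 1.0 API

# Placeholder few-shot data; replace with your own labelled messages.
train_ds = Dataset.from_dict({
    "text": ["example message one", "example message two", "another example", "one more example"],
    "label": [0, 1, 0, 1],
})

model = SetFitModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
trainer = Trainer(model=model, train_dataset=train_ds)
trainer.train()  # step 1: contrastive fine-tuning of the body; step 2: fitting the classification head
preds = model(["a new message to classify"])
```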
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("cvapict/yhi-message-type-all-MiniLM-L6-v2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
IMISLab/GreekT5-umt5-base-greeksum
|
IMISLab
|
summarization
|
[
"transformers",
"pytorch",
"umt5",
"text2text-generation",
"summarization",
"el",
"arxiv:2311.07767",
"arxiv:2304.00869",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,699,790,884,000 | 2024-08-02T09:14:45 | 41 | 1 |
---
language:
- el
license: apache-2.0
metrics:
- bertscore
- rouge
pipeline_tag: summarization
widget:
- text: 'Να πάρει ""ξεκάθαρη"" θέση σε σχέση με τον κίνδυνο μετάδοσης του κορονοϊού
από τη Θεία Κοινωνία καλεί την κυβέρνηση και τον Πρωθυπουργό με ανακοίνωσή
του τη Δευτέρα ο ΣΥΡΙΖΑ. ""Την ώρα που κλείνουν προληπτικά και ορθώς σχολεία,
πανεπιστήμια, γήπεδα και λαμβάνονται ειδικά μέτρα ακόμη και για την ορκωμοσία
της νέας Προέδρου της Δημοκρατίας, η Ιερά Σύνοδος της Εκκλησίας της Ελλάδος
επιμένει ότι το μυστήριο της Θείας Κοινωνίας δεν εγκυμονεί κινδύνους μετάδοσης
του κορονοϊού, καλώντας όμως τις ευπαθείς ομάδες να μείνουν σπίτι τους"",
αναφέρει η αξιωματική αντιπολίτευση και συνεχίζει: ""Ωστόσο το πρόβλημα
δεν είναι τι λέει η Ιερά Σύνοδος, αλλά τι λέει η Πολιτεία και συγκεκριμένα
ο ΕΟΔΥ και το Υπουργείο Υγείας, που έχουν και την αποκλειστική κοινωνική
ευθύνη για τη μη εξάπλωση του ιού και την προστασία των πολιτών"". ""Σε άλλες
ευρωπαϊκές χώρες με εξίσου μεγάλο σεβασμό στη Χριστιανική πίστη και στο
θρησκευτικό συναίσθημα, τα μυστήρια της Εκκλησίας είτε αναστέλλονται είτε
τροποποιούν το τελετουργικό τους. Μόνο στη χώρα μας έχουμε το θλιβερό προνόμιο
μιας πολιτείας που δεν τολμά να πει το αυτονόητο"", προσθέτει, τονίζοντας
ότι ""η κυβέρνηση λοιπόν και το Υπουργείο Υγείας οφείλουν να πάρουν δημόσια
μια ξεκάθαρη θέση και να μην θυσιάζουν τη δημόσια Υγεία στο βωμό του πολιτικού
κόστους"". ""Συμφωνούν ότι η Θεία Κοινωνία δεν εγκυμονεί κινδύνους μετάδοσης
του κορονοϊού; Δεν είναι θέμα ευσέβειας αλλά κοινωνικής ευθύνης. Και με
τη Δημόσια υγεία δεν μπορούμε να παίζουμε"", καταλήγει η ανακοίνωση του
γραφείου Τύπου του ΣΥΡΙΖΑ. *ΠΩΣ ΜΕΤΑΔΙΔΕΤΑΙ. Χρήσιμος οδηγός για να προστατευθείτε
από τον κορονοϊό *ΤΑ ΝΟΣΟΚΟΜΕΙΑ ΑΝΑΦΟΡΑΣ. Ποια θα υποδέχονται τα κρούσματα
κορονοϊού στην Ελλάδα. *ΤΑΞΙΔΙΑ. Κορονοϊός και αεροδρόμια: Τι να προσέξετε.
*Η ΕΠΙΔΗΜΙΑ ΣΤΟΝ ΠΛΑΝΗΤΗ. Δείτε LIVE χάρτη με την εξέλιξη του κορονοϊού.'
example_title: Politics
- text: 'Με άρθρο της με τίτλο ""Επιστρέψτε στη θεά Ίριδα το σώμα της"", η εφημερίδα
Washington Post τάσσεται υπέρ της επιστροφής των γλυπτών του Παρθενώνα, στην
Αθήνα, στην κοιτίδα του δυτικού πολιτισμού, τώρα που οι συνθήκες έχουν
αλλάξει για την πάλαι ποτέ αυτοκρατορία της Αγγλίας. Αναφερόμενη στις διαφορετικές
απόψεις Ελλήνων και Βρετανών για τα γλυπτά, η συντάκτρια του άρθρου, τονίζει
ότι το αίτημα επιστροφής έχει αποκτήσει μεγαλύτερο βάρος τώρα που το Ηνωμένο
Βασίλειο εγκαταλείπει την Ευρωπαϊκή Ένωση. «Όταν ο Τόμας Μπρους, έβδομος
κόμης του Έλγιν, και 11ος κόμης του Κινκαρντίν, ταξίδεψε στην Ακρόπολη στις
αρχές της δεκαετίας του 1800, ως Βρετανός πρέσβης στην Οθωμανική Αυτοκρατορία,
ο Σουλτάνος λέγεται ότι του έδωσε την άδεια να ""αφαιρέσει μερικά τμήματα
λίθων με παλιές επιγραφές και μορφές"". Ο λόρδος το εξέλαβε ως άδεια να
αφαιρέσει, περίπου, 17 αγάλματα από τα αετώματα, 15 μετώπες, και 247 πόδια
(περίπου 75 μέτρα) της ζωφόρου από τον Παρθενώνα για να τα φέρει στην καλή
μας Αγγλία» αναφέρει στο άρθρο της η Washington Post. Και συνεχίζει λέγοντας
ότι «οι καιροί όμως άλλαξαν και αυτό που θεωρούνταν πιο δικαιολογημένο
τότε, σήμερα θεωρείται ευρέως ως μια ασυνείδητη πράξη». Σε μία έμμεση
αναφορά στο Brexit, και υπεραμυνόμενη της επιστροφής των γλυπτών στην Ελλάδα,
η συντάκτρια του άρθρου της Washington Post, διερωτάται: «Γιατί να παραμείνουν
τα μάρμαρα στη φύλαξη της χώρας που επιμένει ότι ανήκει μόνο στον εαυτό
της;» και σημειώνει: «Η Ελλάδα τιμάται σήμερα ως λίκνο του δυτικού πολιτισμού,
και ποιοί παρά οι Έλληνες θα μπορούσαν να στεγάσουν τον πολιτισμό αυτό;».'
example_title: Culture
- text: Το Διεθνές Νομισματικό Ταμείο (ΔΝΤ) προβλέπει ένα χρέος ρεκόρ των πλούσιων
χωρών το 2014 και κρίνει ""πιθανό"" να υπάρξει επιπλέον συμβολή των πιο
εύπορων προσώπων και των πολυεθνικών επιχειρήσεων σε μια μείωση των ελλειμμάτων,
σύμφωνα με έκθεσή του η οποία δόθηκε σήμερα στη δημοσιότητα. ""Φαίνεται
ότι υπάρχει ένα επαρκές περιθώριο σε πολλές ανεπτυγμένες χώρες για να
αντληθούν επιπλέον έσοδα από τα πιο υψηλά εισοδήματα"", υπογραμμίζει το
ΔΝΤ στην έκθεσή του για την δημοσιονομική επιτήρηση. Κατά μέσον όρο, το
δημόσιο χρέος των ανεπτυγμένων χωρών αναμένεται να φτάσει το ""ιστορικό
υψηλό"" του 110% του ΑΕΠ τους το 2014, δηλαδή θα βρίσκεται 35 μονάδες πιο
πάνω από το ποσοστό του 2007, επισημαίνει το ΔΝΤ στην έκθεσή του. Με μια
αναλογία χρέους/ΑΕΠ της τάξης του 242,3% που προβλέπεται να έχει το 2014,
η Ιαπωνία αναμένεται να βρίσκεται πρώτη στον κατάλογο των υπερχρεωμένων
ανεπτυγμένων χωρών, ακολουθούμενη από την Ελλάδα (174%), την Ιταλία (133,1%)
και την Πορτογαλία (125,3%). Οι ΗΠΑ, οι οποίες έχουν παραλύσει από ένα δημοσιονομικό
αδιέξοδο και απειλούνται από μια πιθανή στάση πληρωμών, θα δουν το χρέος
τους να ανεβαίνει στο 107,3% του ΑΕΠ τους το 2014, δηλαδή θα βρίσκονται πολύ
πιο μπροστά από την Γαλλία και το 94,8% στο οποίο αναμένεται ότι θα ανέρχεται
την ερχόμενη χρονιά το χρέος της. Η δεύτερη οικονομική δύναμη του κόσμου,
η Κίνα δίνει την εικόνα του καλού μαθητή με μια αναλογία χρέους/ΑΕΠ μόνον
20,9% την ερχόμενη χρονιά, σύμφωνα με το ΔΝΤ. ""Παρά τις προόδους στη μείωση
των ελλειμμάτων, οι δημοσιονομικές αδυναμίες παραμένουν βαθιές στις ανεπτυγμένες
χώρες"", επισημαίνεται στην έκθεση. Απέναντι σε αυτές τις ανισορροπίες,
το ΔΝΤ εκφράζει την ανησυχία του καθώς βλέπει ""ένα φορολογικό σύστημα
υπό πίεση"", το οποίο ευνοεί τον ανταγωνισμό μεταξύ των κρατών και επιτρέπει
στους εύπορους φορολογούμενους και στις πολυεθνικές να ελαφρύνουν τους φόρους
τους. Μόνον στις ΗΠΑ, το ΔΝΤ υπολογίζει σε 60 δισεκατομμύρια δολάρια τα έσοδα
που φέρεται ότι χάνονται λόγω τεχνικών βελτιστοποίησης της φορολογίας των
πολυεθνικών. Το ΔΝΤ επισημαίνει ότι οι τελευταίες δεκαετίες έχουν σηματοδοτηθεί
από μια ""θεαματική άνοδο"" του πλούτου του ""1%"" των πιο πλούσιων, κυρίως
στον αγγλοσαξονικό κόσμο, χωρίς ωστόσο η φορολογία να έχει προσαρμοστεί
σε αυτήν την εξέλιξη. ""Σε πολλές χώρες θα ήταν πιθανό να επιβληθούν επιπλέον
φόροι σε αυτούς που διαθέτουν τα πιο υψηλά εισοδήματα"", υπογραμμίζει το
ΔΝΤ, το οποίο κρίνει εξάλλου ""συνετό"" τον υπολογισμό σε 4.500 δισεκατομμύρια
δολάρια των διαθεσίμων που αποκρύπτονται από ιδιώτες σε φορολογικούς παραδείσους.
Οι χώρες της Ομάδας των Είκοσι (G20), οι υπουργοί Οικονομικών των οποίων
συναντώνται αυτήν την εβδομάδα στην Ουάσινγκτον, ξεκίνησαν πρόσφατα πρωτοβουλίες
για την πάταξη της φοροδιαφυγής.
example_title: Economics
model-index:
- name: IMISLab/GreekT5-umt5-base-greeksum
results:
- task:
type: summarization
name: Summarization
dataset:
name: GreekSUM
type: greeksum
config: default
split: test
metrics:
- type: rouge
value: 26.67
name: ROUGE-1
verified: true
- type: rouge
value: 13.0
name: ROUGE-2
verified: true
- type: rouge
value: 22.42
name: ROUGE-L
verified: true
- type: bertscore
value: 73.41
name: BERTScore
verified: true
---
# GreekT5 (umt5-base-greeksum)
A Greek news summarization model trained on [GreekSum](https://github.com/iakovosevdaimon/GreekSUM).
This model is part of a series of models trained as part of our research paper:
[Giarelis, N., Mastrokostas, C., & Karacapilidis, N. (2024) GreekT5: Sequence-to-Sequence Models for Greek News Summarization](https://link.springer.com/chapter/10.1007/978-3-031-63215-0_5) [\[arxiv\]](https://arxiv.org/abs/2311.07767)
The proposed models were trained and evaluated on the same dataset against [GreekBART](https://arxiv.org/abs/2304.00869).
For more information see the evaluation section below.
## Training dataset
The training dataset of `GreekT5-umt5-base-greeksum` is [GreekSum](https://github.com/iakovosevdaimon/GreekSUM/), which is the first news summarization dataset for the Greek Language.
This dataset contains ~151,000 news articles collected from [News24/7](https://www.news247.gr/), belonging to various topics (i.e., society, politics, economy, culture or world news).
For more information see: [https://arxiv.org/abs/2304.00869](https://arxiv.org/abs/2304.00869)
## Training configuration
We trained `google/umt5-base` [580 million parameters (~2.37 GB)] on the GreekSUM train split using the following parameters:
* GPU batch size = 1
* Total training epochs = 10
* AdamW optimizer (e = 1e−8, β1 = 0.9 and β2 = 0.0999)
* Learning rate = 3e−4
* No warmup steps
* 32-bit floating precision
* Tokenization
* maximum input token length = 1024
* maximum output token length = 128
* padding = ‘max_length’
* truncation = True
**Note:** Since T5-based models use a multi-task architecture, the prefix *‘summarize: ’* was prepended to each training sample.
## Evaluation
**Approach**|**ROUGE-1**|**ROUGE-2**|**ROUGE-L**|**BERTScore**
------------|-----------|-----------|-----------|-------------
TextRank|18.10|5.76|13.84|68.39
GreekT5 (mt5-small)|14.84|1.68|12.39|72.96
GreekT5 (umt5-small)|25.49|12.03|21.32|72.86
**GreekT5 (umt5-base)**|**26.67**|**13.00**|**22.42**|73.41
GreekBART|17.43|2.44|15.08|**75.89**
### Example code
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
model_name = 'IMISLab/GreekT5-umt5-base-greeksum'
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
summarizer = pipeline(
'summarization',
device = 'cpu',
model = model,
tokenizer = tokenizer,
max_new_tokens = 128,
truncation = True
)
text = 'Να πάρει ""ξεκάθαρη"" θέση σε σχέση με τον κίνδυνο μετάδοσης του κορονοϊού από τη Θεία Κοινωνία καλεί την κυβέρνηση και τον Πρωθυπουργό με ανακοίνωσή του τη Δευτέρα ο ΣΥΡΙΖΑ. ""Την ώρα που κλείνουν προληπτικά και ορθώς σχολεία, πανεπιστήμια, γήπεδα και λαμβάνονται ειδικά μέτρα ακόμη και για την ορκωμοσία της νέας Προέδρου της Δημοκρατίας, η Ιερά Σύνοδος της Εκκλησίας της Ελλάδος επιμένει ότι το μυστήριο της Θείας Κοινωνίας δεν εγκυμονεί κινδύνους μετάδοσης του κορονοϊού, καλώντας όμως τις ευπαθείς ομάδες να μείνουν σπίτι τους"", αναφέρει η αξιωματική αντιπολίτευση και συνεχίζει: ""Ωστόσο το πρόβλημα δεν είναι τι λέει η Ιερά Σύνοδος, αλλά τι λέει η Πολιτεία και συγκεκριμένα ο ΕΟΔΥ και το Υπουργείο Υγείας, που έχουν και την αποκλειστική κοινωνική ευθύνη για τη μη εξάπλωση του ιού και την προστασία των πολιτών"". ""Σε άλλες ευρωπαϊκές χώρες με εξίσου μεγάλο σεβασμό στη Χριστιανική πίστη και στο θρησκευτικό συναίσθημα, τα μυστήρια της Εκκλησίας είτε αναστέλλονται είτε τροποποιούν το τελετουργικό τους. Μόνο στη χώρα μας έχουμε το θλιβερό προνόμιο μιας πολιτείας που δεν τολμά να πει το αυτονόητο"", προσθέτει, τονίζοντας ότι ""η κυβέρνηση λοιπόν και το Υπουργείο Υγείας οφείλουν να πάρουν δημόσια μια ξεκάθαρη θέση και να μην θυσιάζουν τη δημόσια Υγεία στο βωμό του πολιτικού κόστους"". ""Συμφωνούν ότι η Θεία Κοινωνία δεν εγκυμονεί κινδύνους μετάδοσης του κορονοϊού; Δεν είναι θέμα ευσέβειας αλλά κοινωνικής ευθύνης. Και με τη Δημόσια υγεία δεν μπορούμε να παίζουμε"", καταλήγει η ανακοίνωση του γραφείου Τύπου του ΣΥΡΙΖΑ. *ΠΩΣ ΜΕΤΑΔΙΔΕΤΑΙ. Χρήσιμος οδηγός για να προστατευθείτε από τον κορονοϊό *ΤΑ ΝΟΣΟΚΟΜΕΙΑ ΑΝΑΦΟΡΑΣ. Ποια θα υποδέχονται τα κρούσματα κορονοϊού στην Ελλάδα. *ΤΑΞΙΔΙΑ. Κορονοϊός και αεροδρόμια: Τι να προσέξετε. *Η ΕΠΙΔΗΜΙΑ ΣΤΟΝ ΠΛΑΝΗΤΗ. Δείτε LIVE χάρτη με την εξέλιξη του κορονοϊού.'
output = summarizer('summarize: ' + text)
print(output[0]['summary_text'])
```
## Contact
If you have any questions/feedback about the model please e-mail one of the following authors:
```
[email protected]
[email protected]
[email protected]
```
## Citation
The model has been officially released with the article: [GreekT5: Sequence-to-Sequence Models for Greek News Summarization](https://arxiv.org/abs/2311.07767).
If you use the model, please cite the following:
```
@inproceedings{giarelis2024greekt5,
title={GreekT5: Sequence-to-Sequence Models for Greek News Summarization},
author={Giarelis, Nikolaos and Mastrokostas, Charalampos and Karacapilidis, Nikos},
booktitle={IFIP International Conference on Artificial Intelligence Applications and Innovations},
pages={60--73},
year={2024},
organization={Springer}
}
```
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
skywood/NHNDQ-nllb-finetuned-en2ko-ct2-float16
|
skywood
|
translation
|
[
"transformers",
"translation",
"en",
"ko",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | 1,712,473,932,000 | 2024-04-08T11:50:57 | 79 | 1 |
---
language:
- en
- ko
license: cc-by-4.0
tags:
- translation
---
This repository contains only a CTranslate2 conversion of the original model, produced with:
`ct2-transformers-converter --model NHNDQ/nllb-finetuned-en2ko --quantization float16 --output_dir NHNDQ-nllb-finetuned-en2ko-ct2`
All copyrights belong to the original authors, and the converted model may be deleted upon request. Below is the original model information.
Original URL : https://huggingface.co/NHNDQ/nllb-finetuned-en2ko
## Model Details
* Model Description: CTranslate2 conversion of the fine-tuned facebook/nllb-200-distilled-600M model
* Developed by: DanielHeo
## Original Model Details
* Model Description: Fine-tuned facebook/nllb-200-distilled-600M model
* Developed by: Jisu Kim, Juhwan Lee, TakSung Heo, and Minsu Jeong
* Model Type: Translation
* Language(s):
* Source Language: English
* Target Language: Korean
* License: CC-BY-4.0
## Dataset
* [AI-hub dataset](https://www.aihub.or.kr/)
## BLEU Score
* Deepl translation: 22.83
* Fine-tune nllb: 33.66
## Uses
This model can be used for translation and text-to-text generation.
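The snippet below is a minimal usage sketch (not part of the original card). It assumes the converted model directory is named as in the conversion command above and that the fine-tuned model keeps NLLB's language codes (`eng_Latn` → `kor_Hang`):
```python
import ctranslate2
import transformers
# Load the converted CTranslate2 model (directory name from the conversion command above).
translator = ctranslate2.Translator("NHNDQ-nllb-finetuned-en2ko-ct2", device="cpu")
# The tokenizer still comes from the original Hugging Face repository.
tokenizer = transformers.AutoTokenizer.from_pretrained("NHNDQ/nllb-finetuned-en2ko", src_lang="eng_Latn")
# Tokenize the English source sentence into subword tokens.
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("The weather is nice today."))
# NLLB models expect the target language code as a decoding prefix (assumed here: kor_Hang).
results = translator.translate_batch([source], target_prefix=[["kor_Hang"]])
target_tokens = results[0].hypotheses[0][1:]  # drop the leading language token
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target_tokens)))
```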
## Data Augmentation with backtranslation task
You can perform Korean data augmentation via backtranslation with the Python package [KoTAN](https://github.com/KoJLabs/KoTAN/tree/main)
|
[
"TRANSLATION"
] |
Non_BioNLP
|
XSY/t5-small-finetuned-xsum
|
XSY
|
text2text-generation
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2021-11-09T13:40:46 | 123 | 0 |
---
{}
---
这个模型是根据这个一步一步完成的,如果想自己微调,请参考https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/summarization.ipynb
This model is completed step by step according to this, if you want to fine-tune yourself, please refer to https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/summarization.ipynb
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.6901
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4500
- Rouge1: 28.6901
- Rouge2: 8.0102
- Rougel: 22.6087
- Rougelsum: 22.6105
- Gen Len: 18.824
## Model description
More information needed
## Intended uses & limitations
More information needed
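As a quick illustration (not part of the original card), the fine-tuned checkpoint can be used for abstractive summarization through the standard `transformers` pipeline; the article text below is only a placeholder:
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="XSY/t5-small-finetuned-xsum")
article = (
    "The full text of a news article goes here. "
    "XSum-style models are trained to produce a single-sentence summary."
)
# max_length/min_length are generation bounds in tokens, chosen here for a short summary.
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```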
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.6799 | 1.0 | 25506 | 2.4500 | 28.6901 | 8.0102 | 22.6087 | 22.6105 | 18.824 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
tamilnlpSLIIT/whisper-ta
|
tamilnlpSLIIT
|
automatic-speech-recognition
|
[
"transformers",
"pytorch",
"jax",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"ta",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | 1,716,137,172,000 | 2024-05-19T16:46:12 | 7 | 0 |
---
language:
- ta
license: apache-2.0
metrics:
- wer
tags:
- whisper-event
model-index:
- name: Whisper Tamil Medium - Vasista Sai Lodagala
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: google/fleurs
type: google/fleurs
config: ta_in
split: test
metrics:
- type: wer
value: 6.97
name: WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: ta
split: test
metrics:
- type: wer
value: 6.5
name: WER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tamil Medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Tamil data available from multiple publicly available ASR corpuses.
It has been fine-tuned as a part of the Whisper fine-tuning sprint.
**NOTE:** The code used to train this model is available for re-use in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository.
## Usage
In order to evaluate this model on an entire dataset, the evaluation codes available in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository can be used.
The same repository also provides the scripts for faster inference using whisper-jax.
In order to infer a single audio file using this model, the following code snippet can be used:
```python
>>> import torch
>>> from transformers import pipeline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> transcribe = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-tamil-medium", chunk_length_s=30, device=device)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="ta", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
For faster inference of whisper models, the [whisper-jax](https://github.com/sanchit-gandhi/whisper-jax) library can be used. Please follow the necessary installation steps as mentioned [here](https://github.com/vasistalodagala/whisper-finetune#faster-evaluation-with-whisper-jax), before using the following code snippet:
```python
>>> import jax.numpy as jnp
>>> from whisper_jax import FlaxWhisperForConditionalGeneration, FlaxWhisperPipline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> transcribe = FlaxWhisperPipline("vasista22/whisper-tamil-medium", batch_size=16)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="ta", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
## Training and evaluation data
Training Data:
- [IISc-MILE Tamil ASR Corpus](https://www.openslr.org/127/)
- [ULCA ASR Corpus](https://github.com/Open-Speech-EkStep/ULCA-asr-dataset-corpus#tamil-labelled--total-duration-is-116024-hours)
- [Shrutilipi ASR Corpus](https://ai4bharat.org/shrutilipi)
- [Microsoft Speech Corpus (Indian Languages)](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e)
- [Google/Fleurs Train+Dev set](https://huggingface.co/datasets/google/fleurs)
- Babel ASR Corpus
Evaluation Data:
- [Microsoft Speech Corpus (Indian Languages) Test Set](https://msropendata.com/datasets/7230b4b1-912d-400e-be58-f84e0512985e)
- [Google/Fleurs Test Set](https://huggingface.co/datasets/google/fleurs)
- [IISc-MILE Test Set](https://www.openslr.org/127/)
- Babel Test Set
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 48
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 17500
- training_steps: 33892 (Initially set to 84730 steps)
- mixed_precision_training: True
## Acknowledgement
This work was done at [Speech Lab, IIT Madras](https://asr.iitm.ac.in/).
The compute resources for this work were funded by "Bhashini: National Language translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India.
|
[
"TRANSLATION"
] |
Non_BioNLP
|
SERMAS/LLaMa-3-emotions-gestures-gguf
|
SERMAS
|
text-generation
|
[
"transformers",
"llama",
"text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,718,377,158,000 | 2024-07-12T13:56:14 | 9 | 0 |
---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
---
This model requires instructions. Following is an example input sequence:
```
You are a virtual agent specializing in postal services, insurance and reception. Your job is to guide customers through the process of parcel shipping,
answer their questions about insurance or register them, open the turnstile and tell them where to find their meeting room. To do this, you need to
understand the customers' intentions and the information they provide in their utterances in order to answer them in a helpful and friendly manner.
###Instruction
Consider the following conversation between you and a customer. Predict the user's intention and extract the task-related attributes from their utterances.
Generate your next answer, also considering the knowledge below. Return the results line by line. Here is an example:
User Intention:
Parcel Choice
Attributes:
Weight: 10kg
Destination: London, UK
Virtual Agent:
If your item weighs only 10kg, I recommend to use our medium-sized box.
For user intention, the following values are possible: Greeting,Parcel Choice, Recharge Phone, Building Access, Question Answering.
For Attributes, the following values are possible: Outcome Operation, Bill Form Payment Procedure, Import Payment, Destination, Type of Bills, Host Name,
Confirmation to Open the Turnstile, Delivery Option, Ticket Number, Verification Call, Weight, Phone Number, Meeting Date and Time, Bill Form Name, Shipping
Box Description, Host Email, Shipping Procedure, Meeting Room Identifier, Guest Name, Confirmation to Open Turnstile, Phone Provider, Package Required,
Alternative Host Email, Bill Form Description, Question, Type of Service, Alternative Host Name, Shipping Box Name, Shipping Time, Evidence.
###Knowledge
[knowledge document if available]
###Conversation
[dialogue history]
[emotion if available]
[gesture]
###Response
User Intention:
```
Please replace [knowledge document if available] with the knowledge document or an empty string and [dialogue history] with the dialogue context, e.g.:
```
Customer: Hi there!
Virtual Agent: Hello! How can I assist you today?
Customer: I just adopted a cat and I'm interested in getting insurance coverage for accidents and illnesses. Which document should I refer to for information on this?
```
Replace [emotion] with the user emotion, e.g., "The user is curious." [gesture] should be replaced with "The user waits for a response from the virtual agent." as a default value.
This is an example for the expected output:
```
###Response
User Intention:
Question_answering
Attributes:
Question: I just adopted a cat and I'm interested in getting insurance coverage for accidents and illnesses. Which document should I refer to for information on this
Virtual Agent:
You might want to check document_0, which outlines our coverage and assistance services in case of accidents or illnesses suffered by the Animal."
```
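Since this repository provides GGUF weights, one way to run the prompt format above locally is through `llama-cpp-python`. The sketch below is illustrative only; the GGUF filename is an assumption (use whichever file is shipped in the repo):
```python
from llama_cpp import Llama
# Path to the downloaded GGUF file (hypothetical filename).
llm = Llama(model_path="llama3-emotions-gestures.gguf", n_ctx=4096)
prompt = (
    "You are a virtual agent specializing in postal services, insurance and reception. ...\n"
    "###Conversation\n"
    "Customer: Hi there!\n"
    "The user waits for a response from the virtual agent.\n"
    "###Response\n"
    "User Intention:\n"
)
# Stop on the next section marker so the model only fills in the response block.
output = llm(prompt, max_tokens=256, stop=["###"], temperature=0.2)
print(output["choices"][0]["text"])
```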
|
[
"QUESTION_ANSWERING"
] |
Non_BioNLP
|
fine-tuned/jinaai_jina-embeddings-v2-base-en-6162024-xxse-webapp
|
fine-tuned
|
feature-extraction
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Query",
"Document",
"Retrieval",
"Description",
"JSON",
"custom_code",
"en",
"dataset:fine-tuned/jinaai_jina-embeddings-v2-base-en-6162024-xxse-webapp",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,718,545,202,000 | 2024-06-16T13:40:17 | 5 | 0 |
---
datasets:
- fine-tuned/jinaai_jina-embeddings-v2-base-en-6162024-xxse-webapp
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Query
- Document
- Retrieval
- Description
- JSON
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
general domain
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/jinaai_jina-embeddings-v2-base-en-6162024-xxse-webapp',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
HooshvareLab/bert-fa-base-uncased-clf-persiannews
|
HooshvareLab
|
text-classification
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2021-05-18T20:51:07 | 2,153 | 8 |
---
language: fa
license: apache-2.0
---
# ParsBERT (v2.0)
A Transformer-based Model for Persian Language Understanding
We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes!
Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) repo for the latest information about previous and current models.
## Persian Text Classification [DigiMag, Persian News]
The task target is labeling texts in a supervised manner in both existing datasets `DigiMag` and `Persian News`.
### Persian News
A dataset of various news articles scraped from different online news agencies' websites. The total number of articles is 16,438, spread over eight different classes.
1. Economic
2. International
3. Political
4. Science Technology
5. Cultural Art
6. Sport
7. Medical
8. Social
| Label | # |
|:------------------:|:----:|
| Social | 2170 |
| Economic | 1564 |
| International | 1975 |
| Political | 2269 |
| Science Technology | 2436 |
| Cultural Art | 2558 |
| Sport | 1381 |
| Medical | 2085 |
**Download**
You can download the dataset from [here](https://drive.google.com/uc?id=1B6xotfXCcW9xS1mYSBQos7OCg0ratzKC)
## Results
The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures.
| Dataset | ParsBERT v2 | ParsBERT v1 | mBERT |
|:-----------------:|:-----------:|:-----------:|:-----:|
| Persian News | 97.44* | 97.19 | 95.79 |
## How to use :hugs:
| Task | Notebook |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Text Classification | [](https://colab.research.google.com/github/hooshvare/parsbert/blob/master/notebooks/Taaghche_Sentiment_Analysis.ipynb) |
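For a quick start without the notebook, a minimal inference sketch (not from the original card) using the `transformers` pipeline could look like this; the example sentence is a placeholder Persian news snippet:
```python
from transformers import pipeline
classifier = pipeline(
    "text-classification",
    model="HooshvareLab/bert-fa-base-uncased-clf-persiannews",
)
# A placeholder Persian news sentence; the model returns one of the eight news labels.
print(classifier("تیم ملی فوتبال ایران در بازی دیشب به پیروزی رسید."))
```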
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
  author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ParsBERT Issues](https://github.com/hooshvare/parsbert/issues) repo.
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
Unbabel/wmt20-comet-qe-da-v2-marian
|
Unbabel
|
translation
|
[
"translation",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"license:apache-2.0",
"region:us"
] | 1,716,891,530,000 | 2024-05-28T10:45:42 | 0 | 0 |
---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: apache-2.0
pipeline_tag: translation
---
Marian version of [wmt20-comet-qe-da-v2](https://huggingface.co/Unbabel/wmt20-comet-qe-da-v2).
Credits to Microsoft Translate Team!
# Paper
TBA
# License
Apache-2.0
# Usage
TBA
# Intended uses
Our model is intended to be used for **MT evaluation**.
Given a triplet (source sentence, translation, reference translation), it outputs a single score between 0 and 1, where 1 represents a perfect translation.
# Languages Covered:
This model builds on top of XLM-R, which covers the following languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Azerbaijani, Basque, Belarusian, Bengali, Bengali Romanized, Bosnian, Breton, Bulgarian, Burmese, Burmese, Catalan, Chinese (Simplified), Chinese (Traditional), Croatian, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Hausa, Hebrew, Hindi, Hindi Romanized, Hungarian, Icelandic, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish (Kurmanji), Kyrgyz, Lao, Latin, Latvian, Lithuanian, Macedonian, Malagasy, Malay, Malayalam, Marathi, Mongolian, Nepali, Norwegian, Oriya, Oromo, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Sanskrit, Scottish Gaelic, Serbian, Sindhi, Sinhala, Slovak, Slovenian, Somali, Spanish, Sundanese, Swahili, Swedish, Tamil, Tamil Romanized, Telugu, Telugu Romanized, Thai, Turkish, Ukrainian, Urdu, Urdu Romanized, Uyghur, Uzbek, Vietnamese, Welsh, Western Frisian, Xhosa, Yiddish.
Thus, results for language pairs containing uncovered languages are unreliable!
|
[
"TRANSLATION"
] |
Non_BioNLP
|
antonkurylo/t5-small-billsum
|
antonkurylo
|
summarization
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,729,623,726,000 | 2024-10-23T20:28:36 | 75 | 0 |
---
base_model: t5-small
library_name: transformers
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: t5-small-billsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-billsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9564
- Rouge1: 50.3551
- Rouge2: 29.3717
- Rougel: 39.4102
- Rougelsum: 43.6247
## Model description
More information needed
## Intended uses & limitations
More information needed
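As an illustrative (unofficial) example, the checkpoint can be loaded directly with `AutoModelForSeq2SeqLM` to summarize a legislative text; the input below is a placeholder, and the `summarize:` prefix follows the usual T5 convention rather than anything stated in this card:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_name = "antonkurylo/t5-small-billsum"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
bill_text = "summarize: The people of the State of California do enact as follows: ..."
inputs = tokenizer(bill_text, return_tensors="pt", max_length=512, truncation=True)
# Beam search with a length cap keeps the summary short and deterministic.
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```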
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 2.5468 | 1.0 | 1185 | 2.0937 | 48.625 | 27.492 | 37.671 | 41.4628 |
| 2.2867 | 2.0 | 2370 | 2.0155 | 49.2547 | 28.248 | 38.39 | 42.3374 |
| 2.2241 | 3.0 | 3555 | 1.9796 | 49.8802 | 28.8333 | 38.8829 | 43.027 |
| 2.1925 | 4.0 | 4740 | 1.9620 | 50.07 | 28.9961 | 39.1086 | 43.3251 |
| 2.1791 | 5.0 | 5925 | 1.9576 | 50.2626 | 29.1819 | 39.2415 | 43.4781 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
4yo1/llama3-pre1-ds-lora1
|
4yo1
|
translation
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-3-ko",
"translation",
"en",
"ko",
"dataset:recipes",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,721,264,227,000 | 2024-07-18T01:07:19 | 2,088 | 0 |
---
datasets:
- recipes
language:
- en
- ko
library_name: transformers
license: mit
pipeline_tag: translation
tags:
- llama-3-ko
---
### Model Card for Model ID
### Model Details
Model Card: llama3-pre1-ds-lora1 with Fine-Tuning
**Model Overview**
- Model Name: llama3-pre1-ds-lora1
- Model Type: Transformer-based Language Model
- Model Size: 8 billion parameters
- By: 4yo1
- Languages: English and Korean
### Model Description
llama3-pre1-ds-lora1 is a language model pre-trained on a diverse corpus of English and Korean texts.
This fine-tuning approach allows the model to adapt to specific tasks or datasets with a minimal number of additional parameters, making it efficient and effective for specialized applications.
### how to use - sample code
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer
config = AutoConfig.from_pretrained("4yo1/llama3-pre1-ds-lora1")
model = AutoModel.from_pretrained("4yo1/llama3-pre1-ds-lora1")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-pre1-ds-lora1")
```
|
[
"TRANSLATION"
] |
Non_BioNLP
|
Helsinki-NLP/opus-mt-vi-fr
|
Helsinki-NLP
|
translation
|
[
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"vi",
"fr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T12:08:36 | 111 | 0 |
---
language:
- vi
- fr
license: apache-2.0
tags:
- translation
---
### vie-fra
* source group: Vietnamese
* target group: French
* OPUS readme: [vie-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-fra/README.md)
* model: transformer-align
* source language(s): vie
* target language(s): fra
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.vie.fra | 34.2 | 0.544 |
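The card itself does not include a usage snippet; a minimal sketch with the `transformers` Marian classes (the Vietnamese sentence is a placeholder) could be:
```python
from transformers import MarianMTModel, MarianTokenizer
model_name = "Helsinki-NLP/opus-mt-vi-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
# Translate a Vietnamese sentence into French.
batch = tokenizer(["Hôm nay trời đẹp."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```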
### System Info:
- hf_name: vie-fra
- source_languages: vie
- target_languages: fra
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-fra/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['vi', 'fr']
- src_constituents: {'vie', 'vie_Hani'}
- tgt_constituents: {'fra'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/vie-fra/opus-2020-06-17.test.txt
- src_alpha3: vie
- tgt_alpha3: fra
- short_pair: vi-fr
- chrF2_score: 0.544
- bleu: 34.2
- brevity_penalty: 0.955
- ref_len: 11519.0
- src_name: Vietnamese
- tgt_name: French
- train_date: 2020-06-17
- src_alpha2: vi
- tgt_alpha2: fr
- prefer_old: False
- long_pair: vie-fra
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
[
"TRANSLATION"
] |
Non_BioNLP
|
ahearnlr/bert-emotion
|
ahearnlr
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,685,460,179,000 | 2023-05-30T15:30:44 | 13 | 0 |
---
datasets:
- tweet_eval
license: apache-2.0
metrics:
- precision
- recall
tags:
- generated_from_trainer
model-index:
- name: bert-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: precision
value: 0.7505623807659564
name: Precision
- type: recall
value: 0.7243031825553111
name: Recall
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-emotion
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1413
- Precision: 0.7506
- Recall: 0.7243
- Fscore: 0.7340
## Model description
More information needed
## Intended uses & limitations
More information needed
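As an unofficial illustration, the classifier can be called through the `transformers` pipeline; note that unless `id2label` is set in the config, the outputs may appear as generic `LABEL_0`–`LABEL_3` ids corresponding to tweet_eval's emotion classes:
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="ahearnlr/bert-emotion")
# tweet_eval "emotion" covers anger, joy, optimism and sadness.
print(classifier("I can't wait to see the results of this experiment!"))
```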
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8556 | 1.0 | 815 | 0.7854 | 0.7461 | 0.5929 | 0.6088 |
| 0.5369 | 2.0 | 1630 | 0.9014 | 0.7549 | 0.7278 | 0.7359 |
| 0.2571 | 3.0 | 2445 | 1.1413 | 0.7506 | 0.7243 | 0.7340 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
yam3333/paraphrase-xlm-r-multilingual-v1-finetuned
|
yam3333
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:383",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/paraphrase-xlm-r-multilingual-v1",
"base_model:finetune:sentence-transformers/paraphrase-xlm-r-multilingual-v1",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,731,858,940,000 | 2024-11-17T15:56:43 | 7 | 0 |
---
base_model: sentence-transformers/paraphrase-xlm-r-multilingual-v1
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:383
- loss:CosineSimilarityLoss
widget:
- source_sentence: ब्यवसायसञ्चालन नभएको सिफारिस गर्न सेवा शुल्क तथा दस्तुर कति लाग्छ
sentences:
- <unk>
- <unk>
- <unk>
- source_sentence: स्वास्थ्य संस्था दर्ता गर्न लाग्ने सेवा शुल्क कति ह
sentences:
- <unk>
- <unk>
- <unk>
- source_sentence: अस्थायीबसोबास सिफारिस गर्नको लागी आवश्यक कागजातहरु के के चाहिन्छ
sentences:
- <unk>
- <unk>
- <unk>
- source_sentence: पहिलो पल्ट सम्पत्ति कर तिर्न आवश्यक कागजातहरु के के हुन्
sentences:
- <unk>
- निःशुल्क
- <unk>
- source_sentence: आर्थिक अवस्था बलियो वा सम्पन्नता प्रमाणित गर्न आवश्यक कागजातहरु
के के हुन्
sentences:
- <unk>
- <unk>
- <unk>
---
# SentenceTransformer based on sentence-transformers/paraphrase-xlm-r-multilingual-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-xlm-r-multilingual-v1](https://huggingface.co/sentence-transformers/paraphrase-xlm-r-multilingual-v1). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-xlm-r-multilingual-v1](https://huggingface.co/sentence-transformers/paraphrase-xlm-r-multilingual-v1) <!-- at revision 000e995b707ecea1b901208915ff3533783ec13d -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("yam3333/paraphrase-xlm-r-multilingual-v1-finetuned")
# Run inference
sentences = [
'आर्थिक अवस्था बलियो वा सम्पन्नता प्रमाणित गर्न आवश्यक कागजातहरु के के हुन्',
'<unk>',
'<unk>',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 383 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 383 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 9 tokens</li><li>mean: 17.3 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 3.0 tokens</li><li>max: 3 tokens</li></ul> | <ul><li>min: 1.0</li><li>mean: 1.0</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:--------------------------------------------------------------------------------|:-------------------|:-----------------|
| <code>विज्ञापन कर तिर्न लाग्ने समय कति हो</code> | <code><unk></code> | <code>1.0</code> |
| <code>संरक्षक सिफारिस (संस्थागत) गर्न कति समय लाग्छ</code> | <code><unk></code> | <code>1.0</code> |
| <code>विपन्नविद्यार्थी छात्रबृत्ति सिफारिस गर्नु परेमा सेवा शुल्क कति हो</code> | <code><unk></code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.3.0
- Transformers: 4.46.2
- PyTorch: 2.4.0
- Accelerate: 0.34.2
- Datasets: 3.1.0
- Tokenizers: 0.20.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
mahsaBa76/bge-base-custom-matryoshka
|
mahsaBa76
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:278",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,736,278,128,000 | 2025-01-07T19:28:58 | 7 | 0 |
---
base_model: BAAI/bge-base-en-v1.5
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:278
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: How does Bitcoin's P2P network prevent malicious nodes from flooding
the network with invalid blocks or transactions?
sentences:
- 'paper-title: The Bitcoin Lightning Network: Scalable Off-Chain Instant Payments
\subsection*{8.4 Payment Routing}
It is theoretically possible to build a route map implicitly from observing 2
-of-2 multisigs on the blockchain to build a routing table. Note, however, this
is not feasible with pay-to-script-hash transaction outputs, which can be resolved
out-of-band from the bitcoin protocol via a third party routing service. Building
a routing table will become necessary for large operators (e.g. BGP, Cjdns). Eventually,
with optimizations, the network will look a lot like the correspondent banking
network, or Tier-1 ISPs. Similar to how packets still reach their destination
on your home network connection, not all participants need to have a full routing
table. The core Tier-1 routes can be online all the time - while nodes at the
edges, such as average users, would be connected intermittently.
Node discovery can occur along the edges by pre-selecting and offering partial
routes to well-known nodes.
\subsection*{8.5 Fees}
Lightning Network fees, which differ from blockchain fees, are paid directly between
participants within the channel. The fees pay for the time-value of money for
consuming the channel for a determined maximum period of time, and for counterparty
risk of non-communication.
Counterparty risk for fees only exist with one''s direct channel counterparty.
If a node two hops away decides to disconnect and their transaction gets broadcast
on the blockchain, one''s direct counterparties should not broadcast on the blockchain,
but continue to update via novation with a new Commitment Transaction. See the
Decrementing Timelocks entry in the HTLC section for more information about counterparty
risk.
The time-value of fees pays for consuming time (e.g. 3 days) and is conceptually
equivalent to a gold lease rate without custodial risk; it is the time-value for
using up the access to money for a very short duration. Since certain paths may
become very profitable in one direction, it is possible for fees to be negative
to encourage the channel to be available for those profitable paths.
\section*{9 Risks}
The primary risks relate to timelock expiration. Additionally, for core nodes
and possibly some merchants to be able to route funds, the keys must be held online
for lower latency. However, end-users and nodes are able to keep their private
keys firewalled off in cold storage.
\subsection*{9.1 Improper Timelocks}
Participants must choose timelocks with sufficient amounts of time. If insufficient
time is given, it is possible that timelocked transactions believed to be invalid
will become valid, enabling coin theft by the counterparty. There is a trade-off
between longer timelocks and the time-value of money. When writing wallet and
Lightning Network application software, it is necessary to ensure that sufficient
time is given and users are able to have their transactions enter into the blockchain
when interacting with non-cooperative or malicious channel counterparties.
\subsection*{9.2 Forced Expiration Spam}
Forced expiration of many transactions may be the greatest systemic risk when
using the Lightning Network. If a malicious participant creates many channels
and forces them all to expire at once, these may overwhelm block data capacity,
forcing expiration and broadcast to the blockchain. The result would be mass spam
on the bitcoin network. The spam may delay transactions to the point where other
locktimed transactions become valid.
This may be mitigated by permitting one transaction replacement on all pending
transactions. Anti-spam can be used by permitting only one transaction replacement
of a higher sequence number by the inverse of an even or odd number. For example,
if an odd sequence number was broadcast, permit a replacement to a higher even
number only once. Transactions would use the sequence number in an orderly way
to replace other transactions. This mitigates the risk assuming honest miners.
This attack is extremely high risk, as incorrect broadcast of Commitment Transactions
entail a full penalty of all funds in the channel.
Additionally, one may attempt to steal HTLC transactions by forcing a timeout
transaction to go through when it should not. This can be easily mitigated by
having each transfer inside the channel be lower than the total transaction fees
used. Since transactions are extremely cheap and do not hit the blockchain with
cooperative channel counterparties, large transfers of value can be split into
many small transfers. This attempt can only work if the blocks are completely
full for a long time. While it is possible to mitigate it using a longer HTLC
timeout duration, variable block sizes may become common, which may need mitigations.
If this type of transaction becomes the dominant form of transactions which are
included on the blockchain, it may become necessary to increase the block size
and run a variable blocksize structure and timestop flags as described in the
section below. This can create sufficient penalties and disincentives to be highly
unprofitable and unsuccessful for attackers, as attackers lose all their funds
from broadcasting the wrong transaction, to the point where it will never occur.'
- 'paper-title: OmniLedger: A Secure, Scale-Out, Decentralized Ledger via Sharding
Fig. 11: Bootstrap bandwidth consumption with state blocks.\\[0pt]
to create the UTXO state. For this experiment, we reconstructed Bitcoin''s blockchain
[5], [41] and created a parallel OmniLedger blockchain with weekly state blocks.
Figure 11 depicts the bandwidth overhead of a validator that did not follow the
state for the first 100 days. As we can see, the state block approach is better
if the validator is outdated for more than 19 days or 2736 Bitcoin blocks.
The benefit might not seem substantial for Bitcoin, but in OmniLedger, 2736 blocks
are created in less than 8 hours, meaning that for one day-long epochs, the state
block approach is significantly better. If a peak throughput is required and 16
MB blocks are deployed, we expect reduced bandwidth consumption close to two orders
of magnitude.
\section*{IX. Related Work}
The growing interests in scaling blockchains have produced a number of prominent
systems that we compare in Table IV. ByzCoin [32] is a first step to scalable
BFT consensus, but cannot scale-out. Elastico is the first open scale-out DL,
however, it suffers from performance and security challenges that we have already
discussed in Section II. RSCoin [16] proposes sharding as a scalable approach
for centrally banked cryptocurrencies. RSCoin relies on a trusted source of randomness
for sharding and auditing, making its usage problematic in trustless settings.
Furthermore, to validate transactions, each shard has to coordinate with the client
and instead of running BFT, RSCoin uses a simple two-phase commit, assuming that
safety is preserved if the majority of validators is honest. This
TABLE IV: Comparison of Distributed Ledger Systems
\begin{center}
\begin{tabular}{ccccccc}
\hline
System & Scale-Out & \begin{tabular}{c}
Cross-Shard \\
Transaction Atomicity \\
\end{tabular} & State Blocks & \begin{tabular}{c}
Measured Scalability \\
(\# of Validators) \\
\end{tabular} & \begin{tabular}{c}
Estimated \\
Time to Fail \\
\end{tabular} & \begin{tabular}{c}
Measured \\
Latency \\
\end{tabular} \\
\hline
RSCoin [16] & In Permissioned & Partial & No & 30 & N/A & 1 sec \\
Elastico [34] & In PoW & No & No & 1600 & 1 hour & 800 sec \\
ByzCoin [32] & No & N/A & No & 1008 & 19 years & 40 sec \\
Bitcoin-NG [21] & No & N/A & No & 1000 & N/A & 600 sec \\
PBFT [9], [11] & No & N/A & No & 16 & N/A & 1 sec \\
Nakamoto [36] & No & N/A & No & 4000 & N/A & 600 sec \\
OmniLedger & Yes & Yes & Yes & 2400 & 68.5 years & 1.5 sec \\
\hline
\end{tabular}
\end{center}
approach, however, does not protect from double spending attempts by a malicious
client colluding with a validator.
In short, prior solutions [16], [32], [34] achieve only two out of the three desired
properties; decentralization, long-term security, and scale-out, as illustrated
in Figure 1. OmniLedger overcomes this issue by scaling out, as far as throughput
is concerned, and by maintaining consistency to the level required for safety,
without imposing a total order.
Bitcoin-NG scales Bitcoin without changing the consensus algorithm by observing
that the PoW process does not have to be the same as the transaction validation
process; this results in two separate timelines: one slow for PoW and one fast
for transaction validation. Although Bitcoin-NG significantly increases the throughput
of Bitcoin, it is still susceptible to the same attacks as Bitcoin [24], [3].
Other efforts to scale blockchains include: Tendermint [9], a protocol similar
to PBFT for shard-level consensus that does not scale due to its similarities
to PBFT, and the Lightning Network [40], an off-chain payment protocol for Bitcoin
(also compatible to OmniLedger); it limits the amount of information committed
to the blockchain.'
- "Datatype: lecture_note, Title: Lecture 4: Peer to Peer Networking for Blockchains\n\
\nHow does broadcast take only $O(\\log N)$ steps? We first need to understand\
\ the gossip-flooding-based broadcast protocol. The flooding protocol mimics the\
\ spread of an epidemic. Once a node is ``infected\", it infects its peers and\
\ forever stay's infected. It is easy to see that the spread of information will\
\ happen exponentially; hence the information will take $O(\\log N)$ hops to spread\
\ to all nodes. To formally understand the spread, we note that $d$-regular graphs\
\ with $d\\geq 3$ are an \\textit{expander graph} for large sizes ($|V|$) with\
\ high probability. An expander graph is a connected but sparse graph ($|E|=O(|V|)$)\
\ with the following property: $|\\partial A| \\geq \\epsilon|A|$ for any connected\
\ sub-graph $A$ with $|A|<0.5|V|$. Here, $|\\partial A|$ refers to the number\
\ of vertices outside $A$ with at least one neighbor in $A$. A gossip message\
\ originates with $A(0)$ as the broadcasting node with $|A(0)|=1$, in the next\
\ hop, it will spread to $\\partial A(0)$ with $|A(1)|\\geq (1+\\epsilon)|A(0)|$.\
\ This recursion continues and we have $|A(k)|\\geq(1+\\epsilon)^kA(0)$. Thus,\
\ the number of steps to reach half the number of nodes is logarithmic in the\
\ number of nodes. It can be shown that the other half of the nodes can also be\
\ covered in $O(\\log N)$ time.\n\n\n%Engineering issues (peer discovery, bootstrap,\
\ churn). Implementation connections (to the lab experiment). Validation of tx,\
\ blocks. How does that impact networking? What about skipping validation and\
\ doing cut-through routing? Compact blocks. (RR)\n\n\\section*{Bitcoin P2P network:\
\ A systems view}\nIn Bitcoin, peers connect to each other and communicate using\
\ the TCP protocol. The codebase allows for eight outgoing connections and up\
\ to 117 incoming connections. The network has a high churn rate (rate at which\
\ users enter/leave the system); hence, the node must be ready to connect to new\
\ peers. Moreover, to ensure that the peers we are connecting to are chosen randomly,\
\ the node keeps a large list of nodes running Bitcoin in the form of their (IP,\
\ port) tuple and establishes a connection to one of them randomly when a slot\
\ opens up. \n\nHow does a node bootstrap its list of peers? This happens by\
\ connecting to a set of DNS seed nodes. The seed nodes are not heavily decentralized;\
\ hence completely relying on the peer list provided by them is not advisable.\
\ On connecting to the initial set of peers, a node asks its neighbors for their\
\ peer list using {\\tt getAddr} and {\\tt Addr} messages. The node keeps refreshing\
\ its peer list regularly by exchanging peer lists with its peers. \n\nTransmission\
\ of all block and transactions happen through the inventory message {\\tt inv},\
\ on receiving an {\\tt inv} message the node checks if it has the block or the\
\ transaction in its local storage. If not, it sends the {\\tt getData} message\
\ to fetch those blocks and transactions from the peer. Since block sizes are\
\ relatively large, block transmission can optionally happen in 2 stages. On receiving\
\ the {\\tt inv} message, the node may ask for headers first using {\\tt getHeaders}\
\ and ask for complete blocks only if a header chain is established. This header-first\
\ block transmission increases queries but can decrease the net bandwidth usage.\
\ It may also prevent nodes from accepting PoW invalid blocks since the node can\
\ check from the header whether PoW is valid. \n\nWe saw in the previous lecture\
\ that some nodes might be malicious. A question that may arise is: what stops\
\ malicious nodes from flooding the network with invalid blocks and transactions\
\ (i.e., with invalid PoW and/or signatures)? Such flooding will saturate the\
\ network and increase transmission delay to unacceptable levels. Such an attack\
\ is prevented by a simple design decision, forward message to peers only after\
\ validating the message; i.e., a node sends an {\\tt inv} block message to its\
\ peers only after validating the block. If the adversary creates an invalid block,\
\ the block will not be propagated beyond one honest node. Additionally, nodes\
\ maintain their peers' reputation using some predefined heuristics; if a peer\
\ misbehaves (say by sending a transaction with invalid signatures), its reputation\
\ is downgraded and after a certain lower threshold is disconnected."
- source_sentence: How does the blockchain protocol ensure that all honest players
converge on the same chain?
sentences:
- "paper-title: Blockchain CAP Theorem Allows User-Dependent Adaptivity and Finality\n\
\nDefinition 3 (Potential starting value for period $p$ ). A value $v$ that has\
\ been next-voted by $t+1$ honest nodes for period $p-1$.\n\nDefinition 4 (Committed\
\ value for period $p$ ). A value $v$ that has been cert-voted by $2 t+1$ nodes\
\ for period $p$.\n\nDefinition 5 (Potentially committed value for period $p$\
\ ). A value $v$ that has been cert-voted by $t+1$ honest nodes for period $p$.\n\
\nAlthough we slightly altered Algorand BA protocol (which is highlighted in red\
\ in Appendix A), we note that our modification does not break the safety of the\
\ protocol or cause any deadlock in Lemma 1 and Lemma 2, At a high level, the\
\ validity check only causes less soft-votes from honest nodes, which is indistinguishable\
\ with the case where the leader is malicious and no value receives at least $2\
\ t+1$ soft-votes in some period. Therefore, the safety and deadlock-free property\
\ remain.\n\nLemma 1 (Asynchronous Safety, CP0). Even when the network is partitioned,\
\ the protocol ensures safety of the system so that no two honest nodes will finish\
\ one iteration of the protocol with different outputs.\n\nProof. The following\
\ properties hold even during a network partition.\n\n\\begin{itemize}\n \\item\
\ By quorum intersection, as each honest node only soft-votes one value, then\
\ at most one value is committed or potentially committed for each period $p$\
\ in one iteration.\n \\item If a value $v$ is potentially committed for period\
\ $p$, then only $v$ can receive $2 t+1$ next-votes for period $p$. Thus, the\
\ unique potential starting value for period $p+1$ is $v$.\n \\item If a period\
\ $p$ has a unique potential starting value $v \\neq \\perp$, then only $v$ can\
\ be committed for period $p$. Moreover, honest nodes will only next-vote $v$\
\ for period $p$, so the unique potential starting value for period $p+1$ is also\
\ $v$. Inductively, any future periods $p^{\\prime}>p$ can only have $v$ as a\
\ potential starting value. Thus, once a value is potentially committed, it becomes\
\ the unique value that can be committed or potentially committed for any future\
\ period, and no two honest nodes will finish this iteration of the protocol with\
\ different outputs.\n\\end{itemize}\n\nLemma 2 (Asynchronous Deadlock-freedom).\
\ As long as messages will be delivered eventually, an honest node can always\
\ leave period p, either by entering a higher period or meeting the halting condition\
\ for the current iteration.\n\nProof. We first prove that there can never exist\
\ $2 t+1$ next-votes for two different non- $\\perp$ values from the same period\
\ $p$ by induction.\n\nStart with $p=1$. Note that every honest node sets $s t_{i}^{1}=\\\
perp$ and at most one value (say $v$ ) could receive more than $2 t+1$ soft-votes.\
\ Therefore only value $v$ and $\\perp$ could potentially receive more than $2\
\ t+1$ next-votes in period 1 . Note that it is possible that both $v$ and $\\\
perp$ receive more than $2 t+1$ next-votes: all the honest nodes could next-vote\
\ for $\\perp$ in Step 4 and then next-vote for $v$ in Step 5 after seeing the\
\ $2 t+1$ soft-votes for $v$.\n\nAssume that the claim holds for period $p-1(p\
\ \\geq 2)$ : there exist at most two values each of which has $2 t+1$ next-votes\
\ for period $p-1$, and one of them is necessarily $\\perp$. Then there are three\
\ possible cases:"
- 'paper-title: A Scalable Proof-of-Stake Blockchain in the Open Setting * \\ (or,
How to Mimic Nakamoto''s Design via Proof-of-Stake)
Common prefix. Our analysis is based on the common prefix analysis of core-chain.
The core-chain can achieve common prefix as we discussed. The opportunity for
malicious players to destroy common prefix probability is to generate different
blockchain for the same core-chain. For the malicious players can sign different
blocks for one block-core, this will allow him to fork the blockchain. So the
malicious players can fork the blockchain when they are chosen to generate block.
However, with the property of hash function, the malicious players can not generate
two blocks with same hash value. When an honest player is chosen to extend a block,
he will only support one blockchain. Then all of the honest players will converge
on one blockchain.\\
Corollary 6.4 (Common prefix). Consider the blockchain protocol $\Pi^{\text {main
}}$. Consider $\alpha^{\star}=\lambda \beta^{\star}$, $\lambda>1$, and $\delta>0$.
Consider two honest PoS-players, P in round $r$ and $\mathrm{P}^{\prime}$ in round
$r^{\prime}$, with the local best PoS blockchains $\tilde{\mathcal{C}}, \tilde{\mathcal{C}}^{\prime}$,
respectively, where $r^{\prime} \geq r$. Then we have $\operatorname{Pr}\left[\tilde{\mathcal{C}}[1,
\ell] \preceq \tilde{\mathcal{C}}^{\prime}\right] \geq 1-e^{-\Omega(\kappa)}$,
where $\ell=\operatorname{len}(\mathcal{C})-\Theta(\kappa)$.
Proof. As we discussed, $\tilde{\mathcal{C}}$ and $\tilde{\mathcal{C}}^{\prime}$
are associated with core-chains $\mathcal{C}$ and $\mathcal{C}^{\prime}$ respectively.
From Corollary 5.6 we know that $\operatorname{Pr}\left[\mathcal{C}[1, \ell] \preceq
\mathcal{C}^{\prime}\right] \geq 1-e^{-\Omega(\kappa)}$.
Based on the assumption that $\alpha^{\star}=\lambda \beta^{\star}$ and $\lambda>1$,
we can have that the malicious players are not able to generate more than $\Theta(\kappa)$
blocks before an honest player is chosen to generate block with high probability.
All of the honest players will converge on the same chain. Put them together,
we have $\operatorname{Pr}\left[\tilde{\mathcal{C}}[1, \ell] \preceq \tilde{\mathcal{C}}^{\prime}\right]
\geq 1-e^{-\Omega(\kappa)}$ where $\ell=\operatorname{len}(\mathcal{C})-\Theta(\kappa)$.
Chain soundness. A new player will accept a blockchain (in which the corresponding
corechain is included). The proof idea for achieving chain soundness property
of our blockchain protocol directly follows that for the core-chain protocol.
We have the following statement.\\
Corollary 6.5 (Chain soundness). Consider the blockchain protocol $\Pi^{\text
{main }}$. Consider for every round, $\alpha=\lambda \beta, \lambda>1$, and $\delta>0$.
There are two honest PoS-players, $\mathrm{P}^{\prime}$ and $\mathrm{P}^{\prime
\prime}$ in round $r$, with the local best PoS blockchains $\tilde{\mathcal{C}}^{\prime}$
and $\tilde{\mathcal{C}}^{\prime \prime}$, respectively. Let $\mathrm{P}^{\prime}$
be a new player and $\mathrm{P}^{\prime \prime}$ be an existing player in round
$r$. Then we have $\tilde{\mathcal{C}}^{\prime}[\neg \kappa] \preceq \tilde{\mathcal{C}}^{\prime
\prime}$ and $\tilde{\mathcal{C}}^{\prime \prime}[\neg \kappa] \preceq \tilde{\mathcal{C}}^{\prime}$.'
- "Datatype: lecture_note, Title: Lecture 9: Scaling Latency\n\n\\begin{figure}\n\
\\begin{center}\n\\includegraphics[width=\\textwidth]{figures/Prism_main.pdf}\n\
\\end{center}\n\n\\caption{Factorizing the blocks into three types of blocks:\
\ proposer blocks, transaction blocks and voter blocks.}\n\\label{fig:prism}\n\
\n\\end{figure}\n\nJust as in {\\sf Prism 1.0}, the \\textit{proposer} blocktree\
\ in {\\sf Prism} anchors the blockchain. Each proposer block contains a list\
\ of reference links to \\textit{transaction} blocks that contain transactions,\
\ as well as a single reference to a parent proposer block. Honest nodes mine\
\ proposer blocks following the longest chain rule in the proposer tree.\nWe define\
\ the *level* of a proposer block as its distance from the genesis proposer block,\
\ and the *height* of the proposer tree as the maximum level that contains any\
\ proposer blocks. To determine the ordering of proposer blocks (and thus transaction\
\ blocks and transactions), we elect one \\textit{leader} proposer block from\
\ each level. The sequence of leader blocks up to the height of the proposer tree\
\ is called the \\textit{leader sequence}, and is determined by the *voter* chains.\
\ Note that the leader blocks do not need to follow the chain structure of the\
\ proposer blocks because otherwise deadlock may occur if conflicting blocks (i.e.,\
\ two proposer blocks not on one chain) are determined as leader blocks. \n\n\
In {\\sf Prism}, there are $m$ voter chains, where $m \\gg 1$ is a fixed parameter\
\ chosen by the system designer. The larger the $m$, the more parallel the voting\
\ process and hence the shorter the latency of confirmation. In general $m$ is\
\ chosen as large as network bandwidth and memory management issues are manageable.\
\ For example, $m=1000$ is chosen in the \\href{https://arxiv.org/pdf/1909.11261.pdf}{full-stack\
\ implementation} of Prism. New voter blocks are mined on each voter chain according\
\ to the longest chain rule. A voter block votes for a proposer block by containing\
\ a reference link to that proposer block, with the requirements that: (1) a vote\
\ is valid only if the voter block is in the longest chain of its voter tree;\
\ (2) each voter chain votes for one and only one proposer block at each level;\
\ (3) each voter block votes for all the proposer levels that have not been voted\
\ by its parent. The leader block at each level is the one that has the largest\
\ number of votes among all the proposer blocks at the same level (ties can be\
\ broken by the hash of the proposer blocks). The elected leader blocks then provide\
\ a unique ordering of the transaction blocks to form the final ledger. \n\n{\\\
sf Prism} also uses cryptographic sortition to prevent the adversary from focusing\
\ its mining power on a specific type of blocks or on a specific voter chain.\
\ A miner first forms a ``superblock\" containing $m+2$ parts: a transaction block,\
\ a proposer block and a voter block on the $i$-th voter tree ($1\\leq i \\leq\
\ m$). We say a superblock is successfully mined if \n\\begin{equation}\n \
\ Hash({\\sf nonce}, {\\sf superblock}) < T_{\\rm tx} + T_{\\rm prop} + m T_{\\\
rm v}. \n\\label{eq:sortition}\n\\end{equation}\nFurther, every successfully mined\
\ superblock is identified as a transaction block, a proposer block or a voter\
\ block based on the hash output: \n\n\n* identify the superblock as a proposer\
\ block if the hash output is less than $T_{\\rm prop}$;\n* identify the superblock\
\ as a transaction block if the hash output is in the range $[T_{\\rm prop}, T_{\\\
rm tx} + T_{\\rm prop})$;\n* identify the superblock as a voter block on the\
\ $i$-th voter tree ($1\\leq i \\leq m$) if the hash output is in the range $[T_{\\\
rm tx} + T_{\\rm prop} + (i-1) T_{\\rm v}, T_{\\rm tx} + T_{\\rm prop} + i T_{\\\
rm v} )$;"
- source_sentence: What is the role of the 2/3-GHOST function in the GRANDPA finality
gadget?
sentences:
- 'paper-title: GRANDPA: a Byzantine Finality Gadget
\subsection*{2.3 Preliminaries}
Network model : We will be using the partially synchronous network model introduced
by 7] and in particular the gossip network variant used in [5]. We assume that
any message sent or received by an honest participant reaches all honest participants
within time $T$, but possibly only after some Global Synchronisation Time GST.
Concretely, any message sent or received by some honest participant at time $t$
is received by all honest participants by time GST $+T$ at the latest.
Voters: For each voting step, there is a set of $n$ voters. We will frequently
need to assume that for each such step, at most $f<n / 3$ voters are Byzantine.
We need $n-f$ of voters to agree on finality. Whether or not block producers ever
vote, they will need to be participants who track the state of the protocol.
Votes: A vote is a block hash, together with some metadata such as round number
and the type of vote, such as prevote or precommit, all signed with a voter''s
private key.
Rounds: Each participant has their own idea of what is the current round number.
Every prevote and precommit has an associated round number. Honest voters only
vote once (for each type of vote) in each round and do not vote in earlier rounds
after later ones. Participants need to keep track of which block they see as currently
being the latest finalised block and an estimate of which block could have been
finalised in the last round.
For block $B$, we write chain $(B)$ for the chain whose head is $B$. The block
number, $n(B)$ of a block $B$ is the length of chain $(B)$. For blocks $B^{\prime}$
and $B$, we say $B$ is later than $B^{\prime}$ if it has a higher block number.
We write $B>B^{\prime}$ or that $B$ is descendant of $B^{\prime}$ for $B, B^{\prime}$
appearing in the same blockchain with $B^{\prime}$ later i.e. $B^{\prime} \in$
chain $(B)$ with $n(B)>n\left(B^{\prime}\right) . B \geq B^{\prime}$ and $B \leq
B^{\prime}$ are similar except allowing $B=B^{\prime}$. We write $B \sim B^{\prime}$
or $B$ and $B^{\prime}$ are on the same chain if $B<B^{\prime}, B=B^{\prime}$
or $B>B^{\prime}$; and $B \nsim B^{\prime}$ or $B$ and $B^{\prime}$ are not on
the same chain if there is no such chain.
Blocks are ordered as a tree with the genesis block as root. So any two blocks
have a common ancestor but two blocks not on the same chain do not have a common
descendant. A vote $v$ for a block $B$ by a voter $V$ is a message signed by $V$
containing the blockhash of $B$ and meta-information like the round numbers and
the type of vote.
A voter equivocates in a set of votes $S$ if they have cast multiple different
votes in $S$. We call a set $S$ of votes safe if the number of voters who equivocate
in $S$ is at most $f$. We say that $S$ has a supermajority for a block $B$ if
the set of voters who either have a vote for blocks $\geq B$ or equivocate in
$S$ has size at least $(n+f+1) / 2$. We count equivocations as votes for everything
so that observing a vote is monotonic, meaning that if $S \subset T$ then if $S$
has a supermajority for $B$ so does $T$, while being able to ignore yet more equivocating
votes from an equivocating voter.
For our finality gadget (GRANDPA) we use the ghost [13] eventual consensus algorithm
as $F$. The 2/3-GHOST function $g(S)$ takes a set $S$ of votes and returns the
block $B$ with highest block number such that $S$ has a supermajority for $B$.
If there is no such block, then it returns ''nil''. Note that, if $S$ is safe,
then we can compute $g(S)$ by starting at the genesis block and iteratively looking
for a child of our current block with a supermajority, which must be unique if
it exists. Thus we have:
Lemma 2.5. Let $T$ be a safe set of votes. Then'
- 'paper-title: Zexe: Enabling Decentralized Private Computation
In sum, proofs of predicates'' satisfiability are produced via a SNARK over $E_{\text
{BLS }}$, and proofs for the NP relation $\mathcal{R}_{\mathrm{e}}$ are produced
via a zkSNARK over $E_{\mathrm{CP}}$. The matching fields between the two curves
ensure that the former proofs can be efficiently verified.
Problem 3: Cocks-Pinch curves are costly. While the curve $E_{\mathrm{CP}}$ was
chosen to facilitate efficient checking of proofs over $E_{\mathrm{BLS}}$, the
curve $E_{\mathrm{CP}}$ is at least $2 \times$ more expensive (in time and space)
than $E_{\mathrm{BLS}}$ simply because $E_{\mathrm{CP}}$ ''s base field has about
twice as many bits as $E_{\mathrm{BLS}}$ ''s base field. Checks in the NP relation
$\mathcal{R}_{\mathrm{e}}$\\
that are not directly related to proof checking are now unnecessarily carried
over a less efficient curve.\\
Solution 3: split relations across two curves. We split $\mathcal{R}_{\mathrm{e}}$
into two NP relations $\mathcal{R}_{\mathrm{BLS}}$ and $\mathcal{R}_{\mathrm{CP}}$
(see Fig. 14), with the latter containing just the proof check and the former
containing all other checks. We can then use a zkSNARK over the curve $E_{\text
{BLS }}$ (an efficient curve) to produce proofs for $\mathcal{R}_{\mathrm{BLS}}$,
and a zkSNARK over $E_{\mathrm{CP}}$ (the less efficient curve) to produce proofs
for $\mathcal{R}_{\mathrm{CP}}$. This approach significantly reduces the running
time of DPC.Execute (producing proofs for the checks in $\mathcal{R}_{\mathrm{BLS}}$
is more efficient over $E_{\mathrm{BLS}}$ than over $E_{\mathrm{CP}}$ ), at the
expense of a modest increase in transaction size (a transaction now includes a
zkSNARK proof over $E_{\mathrm{BLS}}$ in addition to a proof over $E_{\mathrm{CP}}$
). An important technicality that must be addressed is that the foregoing split
relies on certain secret information to be shared across the NP relations, namely,
the identities of relevant predicates and the local data. We can store this information
in suitable commitments that are part of the NP instances for the two NP relations
(doing this efficiently requires some care as we discuss below).'
- 'paper-title: Ouroboros Praos: An adaptively-secure, semi-synchronous proof-of-stake
blockchain
where $\alpha_{\mathcal{H}}$ denotes the total relative stake of the honest parties.
Note that this bound applies to all static adversaries $\mathcal{A}$ that corrupt
no more than a $1-\alpha_{\mathcal{H}}$ fraction of all stake. With this in mind,
we define the dominant distribution as follows.\\
Definition 13 (The dominant distribution $\mathcal{D}_{\alpha}^{f}$ ). For two
parameters $f$ and $\alpha$, define $\mathcal{D}_{\alpha}^{f}$ to be the distribution
on strings $w \in\{0,1, \perp\}^{R}$ that independently assigns each $w_{i}$ so
that
\begin{align*}
p_{\perp} \triangleq \operatorname{Pr}\left[w_{i}\right. & =\perp]=1-f, \\
p_{0} \triangleq \operatorname{Pr}\left[w_{i}\right. & =0]=\phi(\alpha) \cdot(1-f),
\quad \text { and } \tag{9}\\
p_{1} \triangleq \operatorname{Pr}\left[w_{i}\right. & =1]=1-p_{\perp}-p_{0} .
\end{align*}
The distribution $\mathcal{D}_{\alpha}^{f}$ "dominates" $\mathcal{D}_{\mathcal{Z},
\mathcal{A}}^{f}$ for any static adversary $\mathcal{A}$ that corrupts no more
than a relative $1-\alpha$ share of the total stake, in the sense that nonempty
slots are more likely to be tainted under $\mathcal{D}_{\alpha}^{f}$ than they
are under $\mathcal{D}_{\mathcal{Z}, \mathcal{A}}^{f}$.
To make this relationship precise, we introduce the partial order $\preceq$ on
the set $\{\perp, 0,1\}$ so that $x \preceq y$ if and only if $x=y$ or $y=1$.
We extend this partial order to $\{\perp, 0,1\}^{R}$ by declaring $x_{1} \ldots
x_{R} \preceq y_{1} \ldots y_{R}$ if and only if $x_{i} \preceq y_{i}$ for each
$i$. Intuitively, the relationship $x \prec y$ asserts that $y$ is "more adversarial
than" $x$; concretely, any legal fork for $x$ is also a legal fork for $y$. Finally,
we define a notion of stochastic dominance for distributions on characteristic
strings, and $\alpha$-dominated adversaries.
Definition 14 (Stochastic dominance). We say that a subset $E \subseteq\{\perp,
0,1\}^{R}$ is monotone if $x \in E$ and $x \preceq y$ implies that $y \in E$.
Let $\mathcal{D}$ and $\mathcal{D}^{\prime}$ be two distributions on the set of
characteristic strings $\{\perp, 0,1\}^{R}$. Then we say that $\mathcal{D}^{\prime}$
dominates $\mathcal{D}$, written $\mathcal{D} \preceq \mathcal{D}^{\prime}$, if
$\operatorname{Pr}{ }_{\mathcal{D}}[E] \leq \operatorname{Pr}_{\mathcal{D}^{\prime}}[E]$
for every monotone set $E$. An adversary $\mathcal{A}$ is called $\alpha$-dominated
if the distribution $\mathcal{D}_{\mathcal{Z}, \mathcal{A}}^{f}$ that it induces
on the set of characteristic strings satisfies $\mathcal{D}_{\mathcal{Z}, \mathcal{A}}^{f}
\preceq \mathcal{D}_{\alpha}^{f}$.
As noted above, this notion of stochastic dominance is consistent with the chain-theoretic
definitions of interest, in the sense that failures of the abstract chain properties
form monotone events. We record this in the lemma below.'
- source_sentence: What does the paper conclude about the relationship between latency
and security in the Nakamoto Consensus protocol?
sentences:
- 'paper-title: Close Latency-Security Trade-off for the Nakamoto Consensus
Evidently, if the infinite sums in (2) and (10) are replaced by partial sums for
numerical evaluation, the resulting (tighter) security level remains unachievable.
\subsection*{3.1 Remarks}
Theorems 3.5 and 3.6 assume the delay $\Delta>0$. The bounds therein still apply
if we set $\Delta=0$, but are slightly looser than the bounds in Theorems 3.3
and 3.4 for the zero-delay case.
It is important to include the time of interest $s$ in Definitions 3.1 and 3.2.
The "bad events" for security breach depend on $s$ as well as the latency $t$.
These well-defined events are concerned with block mining times, not how blocks
form blockchains. ${ }^{3}$
We note that a number of previous analyses on the Nakamoto consensus assume a
finite lifespan of the protocol [1, 10], that is, a maximum round number is defined,
at which round the protocol terminates. The probability of consistency depends
on the maximum round number. In contrast, this paper does not assume a finite
lifespan. Theorem 3.5 states that, barring a small probability event, confirmed
blocks remain permanently in all miners'' longest blockchains into the arbitrary
future.
Even though we provide the same security guarantee for every blockchain after
the confirmation latency $t$, no one can simultaneously guarantee the same for
all blocks that will ever be confirmed.
\footnotetext{${ }^{3}$ To be rigorous, we do not make claims such as "the blockchain/protocol/system
satisfies consistency or liveness properties with probability ..." because those
properties themselves are not events in the probability space defined here.
}
\includegraphics[max width=\textwidth, center]{2025_01_02_447c9a776bd74bcc1f99g-04}
Figure 1: Bitcoin''s latency-security trade-off with $\alpha+\beta=$ $1 / 600$
blocks per second and $\Delta=10$ seconds.
This is a simple consequence of Murphy''s Law: If an adversary keeps trying new
episodes of attacks, with probability 1 a bad event will eventually occur to revert
some confirmed honest blocks.
For technical convenience, we regard a block in a miner''s longest blockchain
to be confirmed after a certain amount of time elapses since the block is mined
or enters the miner''s view. Nakamoto [22] originally proposed confirming a block
after it is sufficiently deep in an honest miner''s longest blockchain. We believe
both confirmation rules are easy to use in practice. And the two confirmation
rules imply each other in probability (see Appendix A for further discussion).
\subsection*{3.2 Numerical Examples}
The latency-security trade-off under several different sets of parameters is plotted
in Figure 1. The mining rate is set to Bitcoin''s one block per 600 seconds, or
$\alpha+\beta=1 / 600$ blocks/second. The propagation delay bound is assumed to
be $\Delta=10$ seconds. The latency upper and lower bounds are computed using
Theorems 3.5 and 3.6, respectively. In Figure 1, all bounds appear to be exponential
for all but very small latency and high error probabilities. This implies the
exponential bound (7) is a good approximation of (5) in Theorem 3.5 for the typical
range of parameters of interest here.
It is instructive to examine concrete data points in Figure 1: If the adversarial
share of the total network mining rate is $10 \%$ $(\alpha: \beta=9: 1)$, then
a confirmation time of four hours is sufficient to achieve $10^{-3}$ security
level, and a ten-hour confirmation achieves $10^{-9}$ security level. These results
are about two hours away from the corresponding lower bounds. Also, for every
additional hour of latency, the security improves by a factor of approximately
20 . If the adversarial share of the mining rate increases to $25 \%(\alpha: \beta=3:
1)$, then 10 hours 40 minutes and 28 hours 45 minutes of confirmation times achieve
$10^{-3}$ and $10^{-9}$ security levels, respectively, and the gap between the
upper and lower bounds is between five and seven hours. In general, the gap is
proportionally insignificant at high security levels but can be otherwise at low
security levels. For given mining rates, the gaps are similar at different security
levels. This indicates the lower bound (10) is also approximately exponential
with a slightly steeper exponent than that of the upper bound.'
- "paper-title: Ledger Combiners for Fast Settlement\n\n$$\n\\begin{aligned}\n\\\
delta\\left(\\operatorname{PoW}_{p}^{m}(x), \\mathrm{IPoW}_{p / m}^{m}(x)\\right)\
\ & =\\frac{1}{2} \\sum_{s \\in\\{0,1\\}^{m}}\\left|\\operatorname{Pr}\\left[\\\
operatorname{PoW}_{p}^{m}(x)=s\\right]-\\operatorname{Pr}\\left[\\operatorname{IPoW}_{p\
\ / m}^{m}(x)=s\\right]\\right| \\\\\n& =\\sum_{\\substack{s \\in\\{0,1)^{m} \\\
\\\n\\mathrm{hw}(s)=1}}\\left(\\operatorname{Pr}\\left[\\operatorname{PoW}_{p}^{m}(x)=s\\\
right]-\\operatorname{Pr}\\left[\\operatorname{IPoW}_{p / m}^{m}(x)=s\\right]\\\
right) \\\\\n& \\leq m \\cdot\\left[\\frac{p}{m}-\\frac{p}{m}\\left(1-\\frac{p}{m}\\\
right)^{m-1}\\right] \\leq p[1-(1-p)]=p^{2}\n\\end{aligned}\n$$\n\nas desired,\
\ where the last inequality follows by Bernoulli inequality.\n\nThe above lemma\
\ already justifies the use of $\\mathrm{PoW}_{p}^{m}$ for achieving subindependence\
\ in practical scenarios. To observe this, note that the use of $\\mathrm{IPoW}_{p\
\ / m}^{m}$ would lead to full independence of the individual PoW lotteries, and\
\ by Lemma 7 the real execution with $\\mathrm{PoW}_{p}^{m}$ will only differ\
\ from this ideal behavior with probability at most $Q \\cdot p^{2}$, where $Q$\
\ is the total number of PoW-queries. With current values of $p \\approx 10^{-22}$\
\ in e.g., Bitcoin ${ }^{2}$, and the block creation time adjusting to 10 minutes,\
\ this difference would manifest on expectation in about $10^{18}$ years. Note\
\ that any future increase of the total mining difficulty while maintaining the\
\ block creation time would only increase this period.\n\nNonetheless, in Appendix\
\ F we give a more detailed analysis of $\\mathrm{PoW}_{p}^{m}$ that shows that,\
\ loosely speaking, $m$ parallel executions of Bitcoin using PoW ${ }_{p}^{m}$\
\ as their shared PoW oracle achieve $\\varepsilon$-subindependence for $\\varepsilon$\
\ negligible in the security parameter.\n\n\\subsection*{4.2 Realizing Rank via\
\ Timestamped Blockchains}\nAn important consideration when deploying our virtual\
\ ledger construction over existing blockchains is how to realize the notion of\
\ rank. We note that typical Nakamoto-style PoS blockchains (e.g., the Ouroboros\
\ family, Snow White) assume a common notion of time among the participants and\
\ explicitly label blocks with slot numbers with a direct correspondence to absolute\
\ time. These slot numbers (or, preferably, a notion of common time associated\
\ with each slot number) directly afford a notion of rank that provides the desired\
\ persistence and liveness guarantees. To formalize this property, we introduce\
\ the notion of a timestamped blockchain.\n\nDefinition 11. A timestamped blockchain\
\ is one satisfying the following conventions:\n\n\\begin{itemize}\n \\item Block\
\ timestamps. Every block contains a declared timestamp.\n \\item Monotonicity.\
\ In order for a block to be considered valid, its timestamp can be no less than\
\ the timestamps of all prior blocks in the blockchain. (Thus valid blockchains\
\ consist of blocks in monotonically increasing order.)\n\\end{itemize}\n\nInformally,\
\ we say that an algorithm is a timestamped blockchain algorithm if it calls for\
\ participants to broadcast timestamped blockchains and to \"respect timestamps.\"\
\ More specifically, the algorithm satisfies the following:\n\n\\begin{itemize}\n\
\ \\item Faithful honest timestamping. Honest participants always post blocks\
\ with timestamps determined by their local clocks.\n \\item Ignore future blocks.\
\ Honest participants ignore blocks that contain a timestamp which is greater\
\ than their local time by more than a fixed constant. (These blocks might be\
\ considered later when the local clock of the participant \"catches up\" with\
\ the timestamp.)\n\\end{itemize}"
- "paper-title: A Scalable Proof-of-Stake Blockchain in the Open Setting * \\\\\
\ (or, How to Mimic Nakamoto's Design via Proof-of-Stake)\n\nLet $\\ell$ be the\
\ length of core-chain $\\mathcal{C}$. In our design, only the elected PoS-players\
\ are allowed to generate new block-cores (to extend the core-chain). Now, each\
\ registered PoS-player P will work on the right \"context\" which consists of\
\ the latest block-core in the longest corechain and the current time; formally\
\ context $:=\\left\\langle h^{\\text {prev }}\\right.$, round $\\rangle$ where\
\ $\\mathcal{C}[\\ell]$ is the latest blockcore in the longest core-chain $\\\
mathcal{C}$, and $h^{\\text {prev }}$ is the identity returned by the functionality\
\ $\\mathcal{F}_{\\text {rCERT }}$ for $\\mathcal{C}[\\ell]$, and round denotes\
\ the current time. The PoS-player P may query $\\mathcal{F}_{\\text {rCERT }}$\
\ by command (Elect, P , context, $\\mathcal{C}$ ) to see if he is selected to\
\ extend $\\mathcal{C}$. If the PoS-player P is selected (with certain probability\
\ $p$ ), he would receive a message (Elected, $\\mathrm{P}, h, \\sigma, \\mathrm{~b}$\
\ ) from $\\mathcal{F}_{\\text {rCERT }}$ such that $\\mathrm{b}=1$. Once receiving\
\ the signature $\\sigma$ from the functionality, the PoS-player P defines a new\
\ block-core $B:=\\left\\langle\\left\\langle h^{\\text {prev }}, h\\right.\\\
right.$, round $\\left.\\rangle, \\mathrm{P}, \\sigma\\right\\rangle$, updates\
\ his local core-chain $\\mathcal{C}$ and then broadcasts the local core-chain\
\ to the network. Please refer to Figure 3 for more details of our core-chain\
\ protocol.\n\nNote that here PoS-players have access to the functionality $\\\
mathcal{F}_{\\text {rCERT }}$. The players need to register to the functionality\
\ $\\mathcal{F}_{\\text {rCERT }}$ before querying the functionality.\n\nThe best\
\ core-chain strategy. Our proof-of-stake core-chain protocol $\\Pi^{\\text {core\
\ }}$ uses the subroutine BestCore to single out the best valid core-chain from\
\ a set of core-chains. Now we describe the rules of selecting the best core-chain.\
\ Roughly speaking, a core-chain is the best one if it is the current longest\
\ valid core-chain. The BestCore subroutine takes as input, a core-chain set $\\\
mathbb{C}^{\\prime}$ and the current time information round'. Intuitively, the\
\ subroutine validates all $\\mathcal{C} \\in \\mathbb{C}^{\\prime}$, then finds\
\ the valid longest core-chain.\n\nIn more detail, BestCore proceeds as follows.\
\ On input the current set of core-chains $\\mathbb{C}^{\\prime}$ and the current\
\ time information round', and for each core-chain $\\mathcal{C}$, the subroutine\
\ then evaluates every block-core of the core-chain $\\mathcal{C}$ sequentially.\
\ Let $\\ell$ be the length of $\\mathcal{C}$. Starting from the head of $\\mathcal{C}$,\
\ for every block-core $\\mathcal{C}[i]$, for all $i \\in[\\ell]$, in the core-chain\
\ $\\mathcal{C}$, the BestCore subroutine (1) ensures that $\\mathcal{C}[i]$ is\
\ linked to the previous block-core $\\mathcal{C}[i-1]$ correctly, and (2) tests\
\ if the\n\n\\section*{Protocol $\\Pi^{\\text {core }}$}\nInitially, a set $\\\
mathcal{P}_{0}$ of players are registered to the functionality $\\mathcal{F}_{\\\
text {rCERT }}$, where $\\mathcal{P}_{0} \\subseteq \\mathcal{P}$. Initially,\
\ for each $\\mathrm{P} \\in \\mathcal{P}$, set $\\mathcal{C}:=\\emptyset$, and\
\ state $:=\\emptyset$.\n\nUpon receiving message (Input-Stake, P ) from the environment\
\ $z$ at round round, the PoS-player $\\mathrm{P} \\in$ $\\mathcal{P}$, with local\
\ state state, proceeds as follows.\n\n\\begin{enumerate}\n \\item Select the\
\ best local PoS core-chain:\n\\end{enumerate}"
- source_sentence: What is the difference between absolute settlement and relative
settlement for transactions in a ledger?
sentences:
- 'paper-title: Ledger Combiners for Fast Settlement
Since the above requirements are formulated independently for each $t$, it is
well-defined to treat $\mathrm{C}[\cdot]$ as operating on ledgers rather than
dynamic ledgers; we sometimes overload the notation in this sense.
Looking ahead, our amplification combiner will consider $\mathrm{t}_{\mathrm{C}}\left(\mathbf{L}_{1}^{(t)},
\ldots, \mathbf{L}_{m}^{(t)}\right)=\bigcup_{i} \mathbf{L}_{i}^{(t)}$ along with
two related definitions of $\mathrm{a}_{\mathrm{C}}$ :
$$
\mathrm{a}_{\mathrm{C}}\left(A_{1}^{(t)}, \ldots, A_{m}^{(t)}\right)=\bigcup_{i}
A_{i}^{(t)} \quad \text { and } \quad \mathrm{a}_{\mathrm{C}}\left(A_{1}^{(t)},
\ldots, A_{m}^{(t)}\right)=\bigcap_{i} A_{i}^{(t)}
$$
see Section 3. The robust combiner will adopt a more sophisticated notion of $t_{c}$;
see Section 5 . In each of these cases, the important structural properties of
the construction are captured by the rank function $r_{C}$.
\subsection*{2.3 Transaction Validity and Settlement}
In the discussion below, we assume a general notion of transaction validity that
can be decided inductively: given a ledger $\mathbf{L}$, the validity of a transaction
$t x \in \mathbf{L}$ is determined by the transactions in the state $\mathbf{L}\lceil\operatorname{tx}\rceil$
of $\mathbf{L}$ up to tx and their ordering. Intuitively, only valid transactions
are then accounted for when interpreting the state of the ledger on the application
level. The canonical example of such a validity predicate in the case of so-called
UTXO transactions is formalized for completeness in Appendix B. Note that protocols
such as Bitcoin allow only valid transactions to enter the ledger; as the Bitcoin
ledger is represented by a simple chain it is possible to evaluate the validity
predicate upon block creation for each included transaction. This may not be the
case for more general ledgers, such as the result of applying one of our combiners
or various DAG-based constructions.
While we focus our analysis on persistence and liveness as given in Definition
3, our broader goal is to study settlement. Intuitively, settlement is the delay
necessary to ensure that a transaction included in some $A^{(t)}$ enters the dynamic
ledger and, furthermore, that its validity stabilizes for all future times.
Definition 5 (Absolute settlement). For a dynamic ledger $\mathbf{D} \stackrel{\text
{ def }}{=} \mathbf{L}^{(0)}, \mathbf{L}^{(1)}, \ldots$ we say that a transaction
$t x \in$ $A^{(\tau)} \cap \mathbf{L}^{(t)}($ for $\tau \leq t)$ is (absolutely)
settled at time $t$ iffor all $\ell \geq t$ we have: (i) $\mathbf{L}^{(t)}\lceil\mathrm{tx}\rceil
\subseteq \mathbf{L}^{(\ell)}$, (ii) the linear orders $<_{\mathbf{L}^{(t)}}$
and $<_{\mathbf{L}^{(t)}}$ agree on $\mathbf{L}^{(t)}\lceil\mathrm{tx}\rceil$,
and (iii) for any $\mathrm{tx}^{\prime} \in \mathbf{L}^{(e)}$ such that $\mathrm{tx}^{\prime}{<_{\mathbf{L}}(t)}
\mathrm{tx}$ we have $\mathrm{tx}^{\prime} \in \mathbf{L}^{(t)}\lceil\mathrm{tx}\rceil$.
Note that for any absolutely settled transaction, its validity is determined and
it is guaranteed to remain unchanged in the future.
It will be useful to also consider a weaker notion of relative settlement of a
transaction: Intuitively, tx is relatively settled at time $t$ if we have the
guarantee that no (conflicting) transaction $\mathrm{tx}^{\prime}$ that is not
part of the ledger at time $t$ can possibly eventually precede $t x$ in the ledger
ordering.'
- "paper-title: Casper the Friendly Finality Gadget\n\n\\documentclass[10pt]{article}\n\
\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{amsmath}\n\
\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage[version=4]{mhchem}\n\
\\usepackage{stmaryrd}\n\\usepackage{graphicx}\n\\usepackage[export]{adjustbox}\n\
\\graphicspath{ {./images/} }\n\\usepackage{hyperref}\n\\hypersetup{colorlinks=true,\
\ linkcolor=blue, filecolor=magenta, urlcolor=cyan,}\n\\urlstyle{same}\n\n\\title{Casper\
\ the Friendly Finality Gadget }\n\n\\author{Vitalik Buterin and Virgil Griffith\\\
\\\nEthereum Foundation}\n\\date{}\n\n\n%New command to display footnote whose\
\ markers will always be hidden\n\\let\\svthefootnote\\thefootnote\n\\newcommand\\\
blfootnotetext[1]{%\n \\let\\thefootnote\\relax\\footnote{#1}%\n \\addtocounter{footnote}{-1}%\n\
\ \\let\\thefootnote\\svthefootnote%\n}\n\n%Overriding the \\footnotetext command\
\ to hide the marker if its value is `0`\n\\let\\svfootnotetext\\footnotetext\n\
\\renewcommand\\footnotetext[2][?]{%\n \\if\\relax#1\\relax%\n \\ifnum\\value{footnote}=0\\\
blfootnotetext{#2}\\else\\svfootnotetext{#2}\\fi%\n \\else%\n \\if?#1\\ifnum\\\
value{footnote}=0\\blfootnotetext{#2}\\else\\svfootnotetext{#2}\\fi%\n \\else\\\
svfootnotetext[#1]{#2}\\fi%\n \\fi\n}\n\n\\begin{document}\n\\maketitle\n\n\n\
\\begin{abstract}\nWe introduce Casper, a proof of stake-based finality system\
\ which overlays an existing proof of work blockchain. Casper is a partial consensus\
\ mechanism combining proof of stake algorithm research and Byzantine fault tolerant\
\ consensus theory. We introduce our system, prove some desirable features, and\
\ show defenses against long range revisions and catastrophic crashes. The Casper\
\ overlay provides almost any proof of work chain with additional protections\
\ against block reversions.\n\\end{abstract}\n\n\\section*{1. Introduction}\n\
Over the past few years there has been considerable research into \"proof of stake\"\
\ (PoS) based blockchain consensus algorithms. In a PoS system, a blockchain appends\
\ and agrees on new blocks through a process where anyone who holds coins inside\
\ of the system can participate, and the influence an agent has is proportional\
\ to the number of coins (or \"stake\") it holds. This is a vastly more efficient\
\ alternative to proof of work (PoW) \"mining\" and enables blockchains to operate\
\ without mining's high hardware and electricity costs.\\\\[0pt]\nThere are two\
\ major schools of thought in PoS design. The first, chain-based proof of stake[1,\
\ 2], mimics proof of work mechanics and features a chain of blocks and simulates\
\ mining by pseudorandomly assigning the right to create new blocks to stakeholders.\
\ This includes Peercoin[3], Blackcoin[4], and Iddo Bentov's work[5].\\\\[0pt]\n\
The other school, Byzantine fault tolerant (BFT) based proof of stake, is based\
\ on a thirty-year-old body of research into BFT consensus algorithms such as\
\ PBFT[6]. BFT algorithms typically have proven mathematical properties; for example,\
\ one can usually mathematically prove that as long as $>\\frac{2}{3}$ of protocol\
\ participants are following the protocol honestly, then, regardless of network\
\ latency, the algorithm cannot finalize conflicting blocks. Repurposing BFT algorithms\
\ for proof of stake was first introduced by Tendermint[7], and has modern inspirations\
\ such as [8]. Casper follows this BFT tradition, though with some modifications.\n\
\n\\subsection*{1.1. Our Work}\nCasper the Friendly Finality Gadget is an overlay\
\ atop a proposal mechanism-a mechanism which proposes blocks ${ }^{1}$. Casper\
\ is responsible for finalizing these blocks, essentially selecting a unique chain\
\ which represents the canonical transactions of the ledger. Casper provides safety,\
\ but liveness depends on the chosen proposal mechanism. That is, if attackers\
\ wholly control the proposal mechanism, Casper protects against finalizing two\
\ conflicting checkpoints, but the attackers could prevent Casper from finalizing\
\ any future checkpoints.\\\\\nCasper introduces several new features that BFT\
\ algorithms do not necessarily support:"
- 'paper-title: Bitcoin and Cryptocurrency Technologies
Interestingly, these concerns have an analogy in the realm of voting. It''s illegal
in the United States and many other nations for individuals to sell their vote.
Arguably participating in a pool controlled by someone else is akin to selling
your vote in the Bitcoin consensus protocol.
Technical requirements for pools. Recall that mining pools appear to be an emergent
phenomenon. There''s no evidence that Satoshi was thinking of mining pools at
the time of Bitcoin''s original design. It wasn''t apparent for a few years that
efficient pools could be run between many individuals who don''t know or trust
each other.
As we saw in Chapter 5, mining pools typically work by designating a pool operator
with a well-known public key. Each of the participating miners mines as usual
but sends in shares to the pool operator. These shares are "near misses" or "partial
solutions" which would be valid solutions at a lower difficulty level. This shows
the pool operator how much work the miner is performing. Whenever one of the pool
participants finds a valid block, the pool operator then distributes the rewards
amongst the pool participants based on the number of shares they have submitted.
As we discussed in Chapter 5, there are many formulas for dividing the revenue
up, but all mining pools follow this basic structure.
The existence of pools thus relies on at least two technical properties of Bitcoin.
The first is that it''s easy for a miner to prove (probabilistically) how much
work they are doing by submitting shares. By choosing a low enough threshold for
shares, miners can easily prove how much work they are performing with arbitrary
precision regardless of the actual difficulty of finding an valid block. This
facet of mining puzzles appears difficult to change, given that we need a puzzle
that can be created with arbitrary difficulty.
Second, pool members can easily prove to the pool operator that they''re following
the rules and working to find valid blocks which would reward the pool as a whole.
This works because the pool''s public key is committed to in the coinbase transaction
included in the block''s Merkle tree of transactions. Once a miner finds a block
or even a share, they can''t change which public key is the recipient of the newly
minted coins.
Block discarding attacks. There is one weakness in this scheme for implementing
mining pools: there is nothing to to enforce that participating miners actually
submit valid blocks to the pool manager in the event that they find them. Suppose
that there''s a pool member that''s upset with a large mining pool. They can participate
in the pool by mining and submitting shares just like normal, but in the event
that they actually find a valid block that would reward the pool they simply discard
it and don''t tell the pool operator about it.
This attack reduces the pool''s overall mining power as none of the attacker''s
work is contributing towards finding valid blocks. However the attacker will still
be rewarded as they appear to be submitting valid shares and simply getting unlucky
to not find any valid blocks. If the mining pool is designed to be revenue-neutral
(that is, all mining rewards are redistributed back to participants) then this
attack can cause the pool to run at a loss.
This attack is sometimes called a vigilante or sabotage attack and is considered
a form of vandalism because the attack appears to be costly for both the attacker
and the pool. The attacker loses money because every block they discard would
have led to some proportion of the block rewards being returned to them. Of course,
the attacker still gets rewards for other puzzle solutions that are found.
It appears that a rational attacker wouldn''t employ this strategy, since they
would lose money without gaining anything tangible. It turns out (quite surprisingly)
that there are cases where this strategy can be profitable, as discussed in the
box below. But in any case, we want to design an entirely new mining puzzle formulation
that ensures this strategy is always profitable.
Sidebar: block discarding attacks between pools. People assumed for years that
it can''t be profitable for a participant to discard valid blocks found on behalf
of the pool. It turns out this strategy can be profitable if one mining pool uses
it to attack another. This was proposed apocryphally many times and first thoroughly
analyzed in a paper by Ittay Eyal in 2015.
Let''s consider a simple case: suppose two mining pools, $A$ and $B$, each have
$50 \%$ of the total mining capacity. Now suppose B uses half of its mining power
( $25 \%$ of the total capacity) to mine as a member in pool A, but discards all
blocks found. We can show, in a simplified model, that B will now earns $5 / 9$
of the total rewards, greater than the $50 \%$ it would earn by mining normally.
In this simple case, dedicating half of its mining power to attacking can be shown
to be the optimal strategy for pool B.'
model-index:
- name: SentenceTransformer based on BAAI/bge-base-en-v1.5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.5
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7857142857142857
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8571428571428571
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8571428571428571
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.26190476190476186
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17142857142857146
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08571428571428573
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7857142857142857
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8571428571428571
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8571428571428571
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7032219246239031
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6511904761904762
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6553083095766022
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.5714285714285714
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7857142857142857
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8214285714285714
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8571428571428571
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5714285714285714
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.26190476190476186
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1642857142857143
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08571428571428573
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5714285714285714
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7857142857142857
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8214285714285714
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8571428571428571
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7276726753008987
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6848639455782314
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6886316064887493
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.5714285714285714
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7857142857142857
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8214285714285714
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8571428571428571
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5714285714285714
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.26190476190476186
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1642857142857143
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08571428571428573
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5714285714285714
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7857142857142857
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8214285714285714
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8571428571428571
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7284895986499949
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6857142857142858
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6893267651888342
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.5
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.75
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8214285714285714
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8571428571428571
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.24999999999999997
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1642857142857143
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08571428571428573
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.5
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.75
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8214285714285714
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8571428571428571
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6935204558400861
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6395833333333334
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6425405844155845
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.42857142857142855
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6785714285714286
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.75
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8214285714285714
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.42857142857142855
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.22619047619047614
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.15000000000000005
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08214285714285716
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.42857142857142855
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.6785714285714286
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.75
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8214285714285714
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.631592589549331
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5696428571428572
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5757306413556414
name: Cosine Map@100
---
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("mahsaBa76/bge-base-custom-matryoshka")
# Run inference
sentences = [
'What is the difference between absolute settlement and relative settlement for transactions in a ledger?',
'paper-title: Ledger Combiners for Fast Settlement\n\nSince the above requirements are formulated independently for each $t$, it is well-defined to treat $\\mathrm{C}[\\cdot]$ as operating on ledgers rather than dynamic ledgers; we sometimes overload the notation in this sense.\n\nLooking ahead, our amplification combiner will consider $\\mathrm{t}_{\\mathrm{C}}\\left(\\mathbf{L}_{1}^{(t)}, \\ldots, \\mathbf{L}_{m}^{(t)}\\right)=\\bigcup_{i} \\mathbf{L}_{i}^{(t)}$ along with two related definitions of $\\mathrm{a}_{\\mathrm{C}}$ :\n\n$$\n\\mathrm{a}_{\\mathrm{C}}\\left(A_{1}^{(t)}, \\ldots, A_{m}^{(t)}\\right)=\\bigcup_{i} A_{i}^{(t)} \\quad \\text { and } \\quad \\mathrm{a}_{\\mathrm{C}}\\left(A_{1}^{(t)}, \\ldots, A_{m}^{(t)}\\right)=\\bigcap_{i} A_{i}^{(t)}\n$$\n\nsee Section 3. The robust combiner will adopt a more sophisticated notion of $t_{c}$; see Section 5 . In each of these cases, the important structural properties of the construction are captured by the rank function $r_{C}$.\n\n\\subsection*{2.3 Transaction Validity and Settlement}\nIn the discussion below, we assume a general notion of transaction validity that can be decided inductively: given a ledger $\\mathbf{L}$, the validity of a transaction $t x \\in \\mathbf{L}$ is determined by the transactions in the state $\\mathbf{L}\\lceil\\operatorname{tx}\\rceil$ of $\\mathbf{L}$ up to tx and their ordering. Intuitively, only valid transactions are then accounted for when interpreting the state of the ledger on the application level. The canonical example of such a validity predicate in the case of so-called UTXO transactions is formalized for completeness in Appendix B. Note that protocols such as Bitcoin allow only valid transactions to enter the ledger; as the Bitcoin ledger is represented by a simple chain it is possible to evaluate the validity predicate upon block creation for each included transaction. This may not be the case for more general ledgers, such as the result of applying one of our combiners or various DAG-based constructions.\n\nWhile we focus our analysis on persistence and liveness as given in Definition 3, our broader goal is to study settlement. Intuitively, settlement is the delay necessary to ensure that a transaction included in some $A^{(t)}$ enters the dynamic ledger and, furthermore, that its validity stabilizes for all future times.\n\nDefinition 5 (Absolute settlement). 
For a dynamic ledger $\\mathbf{D} \\stackrel{\\text { def }}{=} \\mathbf{L}^{(0)}, \\mathbf{L}^{(1)}, \\ldots$ we say that a transaction $t x \\in$ $A^{(\\tau)} \\cap \\mathbf{L}^{(t)}($ for $\\tau \\leq t)$ is (absolutely) settled at time $t$ iffor all $\\ell \\geq t$ we have: (i) $\\mathbf{L}^{(t)}\\lceil\\mathrm{tx}\\rceil \\subseteq \\mathbf{L}^{(\\ell)}$, (ii) the linear orders $<_{\\mathbf{L}^{(t)}}$ and $<_{\\mathbf{L}^{(t)}}$ agree on $\\mathbf{L}^{(t)}\\lceil\\mathrm{tx}\\rceil$, and (iii) for any $\\mathrm{tx}^{\\prime} \\in \\mathbf{L}^{(e)}$ such that $\\mathrm{tx}^{\\prime}{<_{\\mathbf{L}}(t)} \\mathrm{tx}$ we have $\\mathrm{tx}^{\\prime} \\in \\mathbf{L}^{(t)}\\lceil\\mathrm{tx}\\rceil$.\n\nNote that for any absolutely settled transaction, its validity is determined and it is guaranteed to remain unchanged in the future.\n\nIt will be useful to also consider a weaker notion of relative settlement of a transaction: Intuitively, tx is relatively settled at time $t$ if we have the guarantee that no (conflicting) transaction $\\mathrm{tx}^{\\prime}$ that is not part of the ledger at time $t$ can possibly eventually precede $t x$ in the ledger ordering.',
'paper-title: Casper the Friendly Finality Gadget\n\n\\documentclass[10pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\\usepackage{amsmath}\n\\usepackage{amsfonts}\n\\usepackage{amssymb}\n\\usepackage[version=4]{mhchem}\n\\usepackage{stmaryrd}\n\\usepackage{graphicx}\n\\usepackage[export]{adjustbox}\n\\graphicspath{ {./images/} }\n\\usepackage{hyperref}\n\\hypersetup{colorlinks=true, linkcolor=blue, filecolor=magenta, urlcolor=cyan,}\n\\urlstyle{same}\n\n\\title{Casper the Friendly Finality Gadget }\n\n\\author{Vitalik Buterin and Virgil Griffith\\\\\nEthereum Foundation}\n\\date{}\n\n\n%New command to display footnote whose markers will always be hidden\n\\let\\svthefootnote\\thefootnote\n\\newcommand\\blfootnotetext[1]{%\n \\let\\thefootnote\\relax\\footnote{#1}%\n \\addtocounter{footnote}{-1}%\n \\let\\thefootnote\\svthefootnote%\n}\n\n%Overriding the \\footnotetext command to hide the marker if its value is `0`\n\\let\\svfootnotetext\\footnotetext\n\\renewcommand\\footnotetext[2][?]{%\n \\if\\relax#1\\relax%\n \\ifnum\\value{footnote}=0\\blfootnotetext{#2}\\else\\svfootnotetext{#2}\\fi%\n \\else%\n \\if?#1\\ifnum\\value{footnote}=0\\blfootnotetext{#2}\\else\\svfootnotetext{#2}\\fi%\n \\else\\svfootnotetext[#1]{#2}\\fi%\n \\fi\n}\n\n\\begin{document}\n\\maketitle\n\n\n\\begin{abstract}\nWe introduce Casper, a proof of stake-based finality system which overlays an existing proof of work blockchain. Casper is a partial consensus mechanism combining proof of stake algorithm research and Byzantine fault tolerant consensus theory. We introduce our system, prove some desirable features, and show defenses against long range revisions and catastrophic crashes. The Casper overlay provides almost any proof of work chain with additional protections against block reversions.\n\\end{abstract}\n\n\\section*{1. Introduction}\nOver the past few years there has been considerable research into "proof of stake" (PoS) based blockchain consensus algorithms. In a PoS system, a blockchain appends and agrees on new blocks through a process where anyone who holds coins inside of the system can participate, and the influence an agent has is proportional to the number of coins (or "stake") it holds. This is a vastly more efficient alternative to proof of work (PoW) "mining" and enables blockchains to operate without mining\'s high hardware and electricity costs.\\\\[0pt]\nThere are two major schools of thought in PoS design. The first, chain-based proof of stake[1, 2], mimics proof of work mechanics and features a chain of blocks and simulates mining by pseudorandomly assigning the right to create new blocks to stakeholders. This includes Peercoin[3], Blackcoin[4], and Iddo Bentov\'s work[5].\\\\[0pt]\nThe other school, Byzantine fault tolerant (BFT) based proof of stake, is based on a thirty-year-old body of research into BFT consensus algorithms such as PBFT[6]. BFT algorithms typically have proven mathematical properties; for example, one can usually mathematically prove that as long as $>\\frac{2}{3}$ of protocol participants are following the protocol honestly, then, regardless of network latency, the algorithm cannot finalize conflicting blocks. Repurposing BFT algorithms for proof of stake was first introduced by Tendermint[7], and has modern inspirations such as [8]. Casper follows this BFT tradition, though with some modifications.\n\n\\subsection*{1.1. 
Our Work}\nCasper the Friendly Finality Gadget is an overlay atop a proposal mechanism-a mechanism which proposes blocks ${ }^{1}$. Casper is responsible for finalizing these blocks, essentially selecting a unique chain which represents the canonical transactions of the ledger. Casper provides safety, but liveness depends on the chosen proposal mechanism. That is, if attackers wholly control the proposal mechanism, Casper protects against finalizing two conflicting checkpoints, but the attackers could prevent Casper from finalizing any future checkpoints.\\\\\nCasper introduces several new features that BFT algorithms do not necessarily support:',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.5 | 0.5714 | 0.5714 | 0.5 | 0.4286 |
| cosine_accuracy@3 | 0.7857 | 0.7857 | 0.7857 | 0.75 | 0.6786 |
| cosine_accuracy@5 | 0.8571 | 0.8214 | 0.8214 | 0.8214 | 0.75 |
| cosine_accuracy@10 | 0.8571 | 0.8571 | 0.8571 | 0.8571 | 0.8214 |
| cosine_precision@1 | 0.5 | 0.5714 | 0.5714 | 0.5 | 0.4286 |
| cosine_precision@3 | 0.2619 | 0.2619 | 0.2619 | 0.25 | 0.2262 |
| cosine_precision@5 | 0.1714 | 0.1643 | 0.1643 | 0.1643 | 0.15 |
| cosine_precision@10 | 0.0857 | 0.0857 | 0.0857 | 0.0857 | 0.0821 |
| cosine_recall@1 | 0.5 | 0.5714 | 0.5714 | 0.5 | 0.4286 |
| cosine_recall@3 | 0.7857 | 0.7857 | 0.7857 | 0.75 | 0.6786 |
| cosine_recall@5 | 0.8571 | 0.8214 | 0.8214 | 0.8214 | 0.75 |
| cosine_recall@10 | 0.8571 | 0.8571 | 0.8571 | 0.8571 | 0.8214 |
| **cosine_ndcg@10** | **0.7032** | **0.7277** | **0.7285** | **0.6935** | **0.6316** |
| cosine_mrr@10 | 0.6512 | 0.6849 | 0.6857 | 0.6396 | 0.5696 |
| cosine_map@100 | 0.6553 | 0.6886 | 0.6893 | 0.6425 | 0.5757 |
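As a rough sketch of how such an evaluation could be reproduced (the toy queries, corpus, and relevance judgments below are hypothetical placeholders, and `truncate_dim` selects one of the Matryoshka dimensions):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Hypothetical toy data: ids mapped to texts, plus the relevant corpus ids per query
queries = {"q1": "What is absolute settlement in a ledger?"}
corpus = {
    "d1": "paper-title: Ledger Combiners for Fast Settlement ...",
    "d2": "paper-title: Casper the Friendly Finality Gadget ...",
}
relevant_docs = {"q1": {"d1"}}

# Truncate embeddings to one of the Matryoshka dimensions (e.g. 128)
model = SentenceTransformer("mahsaBa76/bge-base-custom-matryoshka", truncate_dim=128)

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_128")
results = evaluator(model)
print(results)  # dict with accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100
```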
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 278 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 278 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 26.06 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>min: 512 tokens</li><li>mean: 512.0 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>How does ByzCoin ensure that microblock chains remain consistent even in the presence of keyblock conflicts?</code> | <code>paper-title: Enhancing Bitcoin Security and Performance with Strong Consistency via Collective Signing<br><br>Figure 3: ByzCoin blockchain: Two parallel chains store information about the leaders (keyblocks) and the transactions (microblocks)\\<br>becomes two separate parallel blockchains, as shown in Fig. 3. The main blockchain is the keyblock chain, consisting of all mined blocks. The microblock chain is a secondary blockchain that depends on the primary to identify the era in which every microblock belongs to, i.e., which miners are authoritative to sign it and who is the leader of the era.<br><br>Microblocks. A microblock is a simple block that the current consensus group produces every few seconds to represent newly-committed transactions. Each microblock includes a set of transactions and a collective signature. Each microblock also includes hashes referring to the previous microblock and keyblock: the former to ensure total ordering, and the latter indicating which consensus group window and l...</code> |
| <code>What are the primary ways in which Bitcoin users can be deanonymized, and why is network-layer deanonymization particularly concerning?</code> | <code>paper-title: Bitcoin and Cryptocurrency Technologies<br><br>This is is exactly what the Fistful of Bitcoins researchers (and others since) have done. They bought a variety of things, joined mining pools, used Bitcoin exchanges, wallet services, and gambling sites, and interacted in a variety of other ways with service providers, compromising 344 transactions in all.<br><br>In Figure 6.5, we again show the clusters of Figure 6.4, but this times with the labels attached. While our guesses about Mt. gox and Satoshi Dice were correct, the researchers were able to identify numerous other service providers that would have been hard to identify without transacting with them.\\<br>\includegraphics[max width=\textwidth, center]{2025_01_02_05ab7f20e06e1a41e145g-175}<br><br>Figure 6.5. Labeled clusters. By transacting with various Bitcoin service providers, Meiklejohn et al. were able to attach real world identities to their clusters.<br><br>Identifying individuals. The next question is: can we do the same thing for indivi...</code> |
| <code>What is the main purpose of the ledger indistinguishability and transaction non-malleability properties in the Zerocash protocol?</code> | <code>paper-title: Zerocash: Decentralized Anonymous Payments from Bitcoin<br><br>Ledger indistinguishability is formalized by an experiment L-IND that proceeds as follows. First, a challenger samples a random bit $b$ and initializes two DAP scheme oracles $\mathcal{O}_{0}^{\text {DAP }}$ and $\mathcal{O}_{1}^{\text {DAP }}$, maintaining ledgers $L_{0}$ and $L_{1}$. Throughout, the challenger allows $\mathcal{A}$ to issue queries to $\mathcal{O}_{0}^{\text {DAP }}$ and $\mathcal{O}_{1}^{\text {DAP }}$, thus controlling the behavior of honest parties on $L_{0}$ and $L_{1}$. The challenger provides the adversary with the view of both ledgers, but in randomized order: $L_{\text {Left }}:=L_{b}$ and $L_{\text {Right }}:=L_{1-b}$. The adversary's goal is to distinguish whether the view he sees corresponds to $\left(L_{\text {Left }}, L_{\text {Right }}\right)=\left(L_{0}, L_{1}\right)$, i.e. $b=0$, or to $\left(L_{\text {Left }}, L_{\text {Right }}\right)=\left(L_{1}, L_{0}\right)$, i.e. $b=1$.<br><br>At eac...</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
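A minimal sketch of how this loss configuration could be reconstructed in code (training data handling is omitted; only the loss wiring shown in the JSON above is illustrated):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Wrap the ranking loss so it is applied at every Matryoshka dimension
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```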
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:-----:|:----:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 1.0 | 1 | 0.6975 | 0.6930 | 0.6760 | 0.6960 | 0.6098 |
| 2.0 | 2 | 0.7258 | 0.7082 | 0.7062 | 0.6935 | 0.6231 |
| 3.0 | 3 | 0.7079 | 0.7270 | 0.7067 | 0.6935 | 0.6184 |
| 4.0 | 4 | 0.7032 | 0.7277 | 0.7285 | 0.6935 | 0.6316 |
### Framework Versions
- Python: 3.10.16
- Sentence Transformers: 3.3.1
- Transformers: 4.41.2
- PyTorch: 2.5.1+cu118
- Accelerate: 1.2.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
RichardErkhov/gplsi_-_Aitana-6.3B-4bits
|
RichardErkhov
| null |
[
"safetensors",
"bloom",
"4-bit",
"bitsandbytes",
"region:us"
] | 1,741,507,199,000 | 2025-03-09T08:01:59 | 2 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Aitana-6.3B - bnb 4bits
- Model creator: https://huggingface.co/gplsi/
- Original model: https://huggingface.co/gplsi/Aitana-6.3B/
Original model description:
---
license: apache-2.0
language:
- ca
- va
tags:
- FLOR
- Bloom
- Aitana
- Catalan
- Valencian
pipeline_tag: text-generation
---
# AITANA-6.3B
<img src="https://cdn-uploads.huggingface.co/production/uploads/639873bb315923c0d5b4c883/6EPbzDJbYtyX_oS15K6jF.png" width="50%" height="50%"/>
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [Demo](#demo)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Evaluation](#evaluation)
- [Additional information](#additional-information)
</details>
## Model description
**AITANA-6.3B** is a text generation model for causal language modeling with a decoder-only architecture.
It has been obtained through continued pre-training of [FLOR-6.3B](https://huggingface.co/projecte-aina/FLOR-6.3B), with emphasis on data (listed below)
in **Valencian** (similar to Catalan). Concretely, this first version of the model was trained for two epochs over the data, amounting to roughly 1,304 million tokens in total. The **Political and Administrative domains** are highly represented in this version of the model.
This model uses FLOR-6.3B as its training base and shares the same tokenizer.
## Intended uses and limitations
Like **FLOR-6.3B**, **AITANA-6.3B** is a base model for causal language modeling. It can be used as is for text generation,
although **fine/instruction-tuning on specific tasks is recommended for its final use**.
This language model has been trained with data in a formal register, namely related to the
administrative and political domain, so it is expected that using it in text-generation tasks
will produce text in this same format.
## Demo
In the following link, you can access an interactive demo to test the text generation in the language model:
[Demo link](https://llm-aitana.gplsi.es/)
In the demo, you can adjust the number of words generated as well as the decoding technique to be used by
the model (top p, top k) and other parameters such as temperature.
## How to use
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
input_text = "Les corts valencianes han pres la decisió de"
model_id = "gplsi/Aitana-6.3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
"text-generation",
model=model_id,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
generation = generator(
input_text,
do_sample=True,
top_k=10,
eos_token_id=tokenizer.eos_token_id,
)
print(f"Result: {generation[0]['generated_text']}")
```
## Training
### Training data
The training corpus has been obtained using web scraping on public data from different sources such as the
[Official Gazette of the University of Alicante (BOUA)](https://www.boua.ua.es/ca), [the Official Gazette of the Generalitat Valenciana (DOGV)](https://dogv.gva.es/va) and accurate data provided by
[the Valencian Courts (DSCV and DSCCV)](https://www.cortsvalencianes.es/ca-va/), giving a total of roughly 1,304 million tokens, as shown in the following table.
Dataset | Language | Words (per-epoch) | Epochs | Total Tokens |
|---------------------|----------|--------------------|--------------|--------------|
DSCV | va | 31.98M | 2 | 57.05M |
DSCCV | va | 45.59M | 2 | 80.91M |
BOUA | va | 11.65M | 2 | 29.02M |
DOGV | va | 301.59M | 2 | 982.33M |
DOGCV | va | 54.92M | 2 | 154.32M |
Several of the downloaded sources had already been used in the FLOR-6.3B training, so the data-collection date of the previous
model has been taken into account and those web pages have been scraped only from that date onward.
Information on the datasets used for training is shown below:
- BOUA: Official Bulletin of the University of Alicante. In this case, we are dealing with documents issued by the University of Alicante in Valencian about grants, calls issued by the university, regulations, resolutions of laws that affect the university environment, and corrections of errors of these same documents issued previously.
- DOGV: Official Journal of the Generalitat Valenciana. This dataset contains official communiqués of different kinds issued by the Generalitat Valenciana, with data entirely in Valencian. It mainly talks about measures taken in the legal field, approval of laws, and public sector communiqués. In this case, we have 18 different documents covering communiqués from 1998 to 2018 and three more recent documents with data from 2019 to 2023.
- DOGCV: in this case, it is the Official Journal of the Generalitat Valenciana, but only the historical documents from 1980 to 1997.
- DSCV: Journal of the Valencian Parliament. This dataset contains transcriptions of the interventions made by the different participants during plenary sessions of the Valencian Parliament. It covers data from 1999 up to 2022; each transcript comprises an .html file.
- DSCCV: this is a dataset of the Valencian Parliament diary, centered on transcriptions of the different commissions held. As in the previous case, it is separated into one file for each transcription.
### Training parameters
During the training of the model, a large context window was desired for text generation, so an input size of 2048
tokens was used, with a minimum context window of 512 tokens when input sequences had to be truncated. 80% of the data obtained was used for the training stage,
while 20% was used during the evaluation stage. A summary of the parameters used during training can be seen in the following table:
Parameter | Value |
|---------------------|---|
Epochs | 1 |
Learning Rate | 2e-5 |
Warmup Steps | 0 |
Precision | bf-16 |
Weight decay | 1e-1 |
Training Fraction | 0.8 |
Evaluation Fraction | 0.2 |
Input size (tokens) | 2048 |
Minimum context window (tokens) | 512 |
Training time (hours/epoch) | 40 |
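The exact preprocessing pipeline is not published here, but a minimal sketch of how raw text could be packed into 2048-token blocks while discarding tail fragments shorter than the 512-token minimum context window (all names below are illustrative, not the actual training code) might look like this:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("projecte-aina/FLOR-6.3B")

BLOCK_SIZE = 2048   # input size used during training
MIN_CONTEXT = 512   # minimum context window when truncating

def pack(texts):
    # Concatenate all token ids, then split into fixed-size blocks
    ids = []
    for t in texts:
        ids.extend(tokenizer(t)["input_ids"])
    blocks = [ids[i:i + BLOCK_SIZE] for i in range(0, len(ids), BLOCK_SIZE)]
    # Keep only blocks that provide at least the minimum context window
    return [b for b in blocks if len(b) >= MIN_CONTEXT]
```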
### Devices
A total of 4 A100 graphics cards with a maximum capacity of 40 GB each were used to train the model. This meant a training time of approximately
40 hours per epoch, using a mini-batch size of 2 and a batch size of 32 to compute backpropagation.
### Distributed Training Strategy
A distributed training strategy called Fully Sharded Data Parallel ([FSDP](https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html))
has been used. With this, the entire model is sharded across the 4 A100s available for training, with a mini-batch size of 2 as
previously discussed.
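A minimal sketch of wrapping the model with PyTorch FSDP is shown below; the process-group setup and the absence of an explicit wrapping policy are assumptions here, not the exact configuration used for AITANA:

```python
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from transformers import AutoModelForCausalLM

dist.init_process_group("nccl")          # one process per GPU, e.g. launched with torchrun
torch.cuda.set_device(dist.get_rank())

model = AutoModelForCausalLM.from_pretrained("projecte-aina/FLOR-6.3B")
# Shard parameters, gradients and optimizer state across the available A100s
model = FSDP(model, device_id=torch.cuda.current_device())
```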
### Languages
In addition to the data already used for the training of FLOR-6.3B, data completely in **Valencian** from the sources mentioned in
the previous section has been used.
## Evaluation
The model has been evaluated using the loss and perplexity during the training stage, and these metrics have also been
obtained during the evaluation stage. Due to the small amount of data, evaluation was performed
at the end of each epoch.
| Epoch | Mode | Loss | Perplexity |
|--------------|------------|----------|-------------|
| 1 | Training | 0.6944 | 2.111 |
| 1 | Evaluation | 0.247 | 1.28 |
| 2 | Training | 0.5335 | 1.705 |
| 2 | Evaluation | 0.4004 | 1.007 |
| 3 | Training | 0.4768 | 1.611 |
| 3 | Evaluation | 0.9141 | 1.007 |
| 4 | Training | 0.4586 | 1.582 |
| 4 | Evaluation | 0.125 | 1.007 |
### Results
In the following table, we can see the results obtained with different benchmarks in comparison with
the model used as the basis for continued pre-training. The results have been obtained from the pre-trained model alone;
no instruction tuning or fine-tuning of any kind has been performed.
| Dataset | Lang. | Task | Metric | Aitana-6.3B | Flor-6.3B |
|------------------------------|--------|---------------------------|---------|-------------|-------------|
| Belebele Cat_latn | ca | Reading Comprehension | acc | **24.33** | 21.89 |
| CATCOLA | ca | Linguistic Acceptability | mcc | -0.04 | **0.04** |
| COPA | ca | Commonsense Reasoning | acc | 75.6 | **76.8** |
| XStoryCloze | ca | Commonsense Reasoning | f1 | **72.14** | 70.88 |
| OpenBookQA | ca | Question Answering | acc | **33.4** | **33.4** |
| Parafraseja | ca | Paraphrasing | acc | 61.7 | **62.38** |
| PAWS-X | ca | Paraphrasing | acc | 58.55 | **60.75** |
| PiQA | ca | Question Answering | acc | 69.8 | **70.51** |
| SiQA | ca | Question Answering | acc | 45.91 | **47.34** |
| ARC Easy | ca | Question Answering | acc | **63.93** | 59.68 |
| ARC Challenge | ca | Question Answering | acc | 33.45 | **33.53** |
| XQuAD | ca | Question Answering | f1 | 59.36 | **59.74** |
| COQCAT | ca | Question Answering | f1 | 63.42 | **66.2** |
| CatalanQA | ca | Question Answering | f1 | 71.42 | **73.24** |
| XNLI | ca | Natural Language Inference| acc | 48.8 | **50.24** |
| Teca | ca | Natural Language Inference| acc | 46.62 | **49.79** |
| WNLI | ca | Natural Language Inference| acc | **57.75** | 54.93 |
| caBreu Extractive | ca | Summarization | rouge1 | **50.94** | 36.21 |
| caBreu Abstractive | ca | Summarization | bleu | 5.27 | **7.11** |
| caBreu Extreme | ca | Summarization | bleu | 1.72 | **4.4** |
| Mgsm direct | ca | Math |exact match | **0.03** | 0 |
| VeritasQA Gen | ca | Truthfulness | bleu | 4.18 | **21.56**|
| VeritasQA MC1 | ca | Truthfulness | acc | **23.18** | 22.35 |
| VeritasQA MC2 | ca | Truthfulness | acc | 34.95 | **35.19**|
| Phrases ca-va | ca/va| Translation - Adaptation | bleu | 89.12 | **90.3** |
| Phrases va-ca | ca/va| Translation - Adaptation | bleu | **93.23** | **92.99**|
| Belebele Cat_latn | es | Reading Comprehension | acc | **25.56** | 22.33 |
| PAWS | es | Paraphrasing | acc | 56.5 | **57.5** |
| Escola | es | Paraphrasing | acc | **0.02** | 0 |
| XStoryCloze | es | Commonsense Reasoning | f1 | 68.46 | **69.76** |
| XQuAD | es | Question Answering | f1 | 58.85 | **63.59** |
| XLSum | es | Summarization | bleu | 0.88 | **1.79** |
| MGSM Direct | es | Math |exact match | **0.02** | 0 |
| VeritasQA Gen | es | Truthfulness | bleu | 13.57 | **22.11**|
| VeritasQA MC1 | es | Truthfulness | acc | **23.46** | 21.51 |
| VeritasQA MC2 | es | Truthfulness | acc | **37.52** | 34.74|
| XNLI | es | Natural Language Inference| acc | 46.67 | **47.87**|
| WNLI | es | Natural Language Inference| acc | 53.52 | **56.34** |
| Phrases es-va | es/va| Translation | bleu | 70.28 | **70.52**|
| Phrases va-es | va/es| Translation | bleu | 79.63 | **79.87**|
## Additional information
### Author
Language and Information System Group [GPLSI](https://gplsi.dlsi.ua.es/)
### Contact
For further information, please send an email to [GPLSI](https://gplsi.dlsi.ua.es/)
### Copyright
Copyright(c) 2024 by GPLSI(https://gplsi.dlsi.ua.es/).
### License
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by [ILENIA](https://proyectoilenia.es/)-[VIVES](https://vives.gplsi.es/) project <<2022/TL22/00215334>>
### Disclaimer
The model published in this repository is intended for a generalist purpose and is available to third parties under a permissive Apache License, Version 2.0.
Be aware that the model may have biases and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using this model (or any system based on it) or become users of the model, they should note that it is their responsibility to mitigate the risks arising from its use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the model (GPLSI) be liable for any results arising from the use made by third parties.
|
[
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION",
"PARAPHRASING"
] |
Non_BioNLP
|
prithivMLmods/Delta-Pavonis-Qwen-14B
|
prithivMLmods
|
text-generation
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,741,946,644,000 | 2025-03-16T10:24:58 | 238 | 3 |
---
base_model:
- prithivMLmods/Calcium-Opus-14B-Elite2-R1
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- text-generation-inference
- trl
- sft
- Qwen
- Distill
---

# **Delta-Pavonis-Qwen-14B**
> Delta-Pavonis-Qwen-14B is based on the Qwen 2.5 14B modality architecture, designed to enhance the reasoning capabilities of 14B-parameter models. This model is optimized for general-purpose reasoning and answering, excelling in contextual understanding, logical deduction, and multi-step problem-solving. It has been fine-tuned using a long chain-of-thought reasoning model and specialized datasets to improve comprehension, structured responses, and conversational intelligence.
## **Key Improvements**
1. **Enhanced General Knowledge**: The model provides broad knowledge across various domains, improving capabilities in answering questions accurately and generating coherent responses.
2. **Improved Instruction Following**: Significant advancements in understanding and following complex instructions, generating structured responses, and maintaining coherence over extended interactions.
3. **Versatile Adaptability**: More resilient to diverse prompts, enhancing its ability to handle a wide range of topics and conversation styles, including open-ended and structured inquiries.
4. **Long-Context Support**: Supports up to 128K tokens for input context and can generate up to 8K tokens in a single output, making it ideal for detailed responses.
## **Quickstart with transformers**
Here is a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and generate content:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Delta-Pavonis-Qwen-14B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "What are the key principles of general-purpose AI?"
messages = [
{"role": "system", "content": "You are a helpful assistant capable of answering a wide range of questions."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## **Intended Use**
1. **General-Purpose Reasoning**:
Designed for broad applicability, assisting with logical reasoning, answering diverse questions, and solving general knowledge problems.
2. **Educational and Informational Assistance**:
Suitable for providing explanations, summaries, and research-based responses for students, educators, and general users.
3. **Conversational AI and Chatbots**:
Ideal for building intelligent conversational agents that require contextual understanding and dynamic response generation.
4. **Multilingual Applications**:
Supports global communication, translations, and multilingual content generation.
5. **Structured Data Processing**:
Capable of analyzing and generating structured outputs, such as tables and JSON, useful for data science and automation.
6. **Long-Form Content Generation**:
Can generate extended responses, including articles, reports, and guides, maintaining coherence over large text outputs.
## **Limitations**
1. **Hardware Requirements**:
Requires high-memory GPUs or TPUs due to its large parameter size and long-context support.
2. **Potential Bias in Responses**:
While designed to be neutral, outputs may still reflect biases present in training data.
3. **Inconsistent Outputs in Creative Tasks**:
May produce variable results in storytelling and highly subjective topics.
4. **Limited Real-World Awareness**:
Does not have access to real-time events beyond its training cutoff.
5. **Error Propagation in Extended Outputs**:
Minor errors in early responses may affect overall coherence in long-form outputs.
6. **Prompt Sensitivity**:
The effectiveness of responses may depend on how well the input prompt is structured.
|
[
"TRANSLATION"
] |
Non_BioNLP
|
aroot/mbart-finetuned-eng-kor-22045430821
|
aroot
|
translation
|
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,688,147,059,000 | 2023-06-30T18:00:59 | 12 | 0 |
---
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: mbart-finetuned-eng-kor-22045430821
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-22045430821
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1052
- Bleu: 5.7445
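As a hedged usage sketch (language codes follow the base `mbart-large-50-many-to-many-mmt` conventions; whether this checkpoint ships its own tokenizer files is an assumption here), English→Korean inference typically looks like:

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_id = "aroot/mbart-finetuned-eng-kor-22045430821"
tokenizer = MBart50TokenizerFast.from_pretrained(model_id, src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("The weather is nice today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["ko_KR"],  # force Korean output
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```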
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.0
|
[
"TRANSLATION"
] |
Non_BioNLP
|
TransferGraph/chiragasarpota_scotus-bert-finetuned-lora-ag_news
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:ag_news",
"base_model:chiragasarpota/scotus-bert",
"base_model:adapter:chiragasarpota/scotus-bert",
"license:apache-2.0",
"model-index",
"region:us"
] | 1,709,074,416,000 | 2024-02-28T00:42:37 | 0 | 0 |
---
base_model: chiragasarpota/scotus-bert
datasets:
- ag_news
library_name: peft
license: apache-2.0
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: chiragasarpota_scotus-bert-finetuned-lora-ag_news
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: ag_news
type: ag_news
config: default
split: test
args: default
metrics:
- type: accuracy
value: 0.5328947368421053
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chiragasarpota_scotus-bert-finetuned-lora-ag_news
This model is a fine-tuned version of [chiragasarpota/scotus-bert](https://huggingface.co/chiragasarpota/scotus-bert) on the ag_news dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.5329
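A minimal sketch of loading this LoRA adapter on top of the base model (ag_news has 4 classes; the loading details below are assumptions, not verified against this repository):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "chiragasarpota/scotus-bert"
adapter_id = "TransferGraph/chiragasarpota_scotus-bert-finetuned-lora-ag_news"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=4)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Stocks rallied after the earnings report.", return_tensors="pt")
pred = model(**inputs).logits.argmax(dim=-1).item()  # index into the 4 ag_news classes
```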
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.25 | None | 0 |
| 0.4224 | 1.3217 | 0 |
| 0.4997 | 1.2231 | 1 |
| 0.5276 | 1.1802 | 2 |
| 0.5329 | 1.1677 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
Cran-May/tempemotacilla-eridanus-0302
|
Cran-May
|
text-generation
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"trl",
"r999",
"conversational",
"en",
"zh",
"base_model:prithivMLmods/Pegasus-Opus-14B-Exp",
"base_model:finetune:prithivMLmods/Pegasus-Opus-14B-Exp",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,740,889,264,000 | 2025-03-02T04:21:05 | 25 | 0 |
---
base_model:
- prithivMLmods/Pegasus-Opus-14B-Exp
language:
- en
- zh
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- text-generation-inference
- trl
- r999
model-index:
- name: Eridanus-Opus-14B-r999
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: wis-k/instruction-following-eval
split: train
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 63.86
name: averaged accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FEridanus-Opus-14B-r999
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: SaylorTwift/bbh
split: test
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 51.04
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FEridanus-Opus-14B-r999
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: lighteval/MATH-Hard
split: test
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 38.6
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FEridanus-Opus-14B-r999
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
split: train
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 19.24
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FEridanus-Opus-14B-r999
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 19.48
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FEridanus-Opus-14B-r999
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 48.46
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FEridanus-Opus-14B-r999
name: Open LLM Leaderboard
---

# **Eridanus-Opus-14B-r999**
Eridanus-Opus-14B-r999 is based on the Qwen 2.5 14B modality architecture, designed to enhance the reasoning capabilities of 14B-parameter models. This model is optimized for general-purpose reasoning and answering, excelling in contextual understanding, logical deduction, and multi-step problem-solving. It has been fine-tuned using a long chain-of-thought reasoning model and specialized datasets to improve comprehension, structured responses, and conversational intelligence.
## **Key Improvements**
1. **Enhanced General Knowledge**: The model provides broad knowledge across various domains, improving capabilities in answering questions accurately and generating coherent responses.
2. **Improved Instruction Following**: Significant advancements in understanding and following complex instructions, generating structured responses, and maintaining coherence over extended interactions.
3. **Versatile Adaptability**: More resilient to diverse prompts, enhancing its ability to handle a wide range of topics and conversation styles, including open-ended and structured inquiries.
4. **Long-Context Support**: Supports up to 128K tokens for input context and can generate up to 8K tokens in a single output, making it ideal for detailed responses.
5. **Multilingual Proficiency**: Supports over 29 languages, including English, Chinese, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
## **Quickstart with transformers**
Here is a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and generate content:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Eridanus-Opus-14B-r999"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "What are the key principles of general-purpose AI?"
messages = [
{"role": "system", "content": "You are a helpful assistant capable of answering a wide range of questions."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## **Intended Use**
1. **General-Purpose Reasoning**:
Designed for broad applicability, assisting with logical reasoning, answering diverse questions, and solving general knowledge problems.
2. **Educational and Informational Assistance**:
Suitable for providing explanations, summaries, and research-based responses for students, educators, and general users.
3. **Conversational AI and Chatbots**:
Ideal for building intelligent conversational agents that require contextual understanding and dynamic response generation.
4. **Multilingual Applications**:
Supports global communication, translations, and multilingual content generation.
5. **Structured Data Processing**:
Capable of analyzing and generating structured outputs, such as tables and JSON, useful for data science and automation.
6. **Long-Form Content Generation**:
Can generate extended responses, including articles, reports, and guides, maintaining coherence over large text outputs.
## **Limitations**
1. **Hardware Requirements**:
Requires high-memory GPUs or TPUs due to its large parameter size and long-context support.
2. **Potential Bias in Responses**:
While designed to be neutral, outputs may still reflect biases present in training data.
3. **Inconsistent Outputs in Creative Tasks**:
May produce variable results in storytelling and highly subjective topics.
4. **Limited Real-World Awareness**:
Does not have access to real-time events beyond its training cutoff.
5. **Error Propagation in Extended Outputs**:
Minor errors in early responses may affect overall coherence in long-form outputs.
6. **Prompt Sensitivity**:
The effectiveness of responses may depend on how well the input prompt is structured.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/prithivMLmods__Eridanus-Opus-14B-r999-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=prithivMLmods%2FEridanus-Opus-14B-r999&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
| Metric |Value (%)|
|-------------------|--------:|
|**Average** | 40.11|
|IFEval (0-Shot) | 63.86|
|BBH (3-Shot) | 51.04|
|MATH Lvl 5 (4-Shot)| 38.60|
|GPQA (0-shot) | 19.24|
|MuSR (0-shot) | 19.48|
|MMLU-PRO (5-shot) | 48.46|
|
[
"TRANSLATION"
] |
Non_BioNLP
|
LaTarn/re-clean-setfit-model
|
LaTarn
|
text-classification
|
[
"sentence-transformers",
"safetensors",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,698,969,826,000 | 2023-11-03T00:04:11 | 46 | 0 |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# LaTarn/re-clean-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
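A rough sketch of these two training steps with the SetFit library is shown below; the dataset, base Sentence Transformer, and column names are placeholders, and the exact trainer API depends on the installed `setfit` version:

```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

dataset = load_dataset("sst2")  # placeholder dataset with "sentence"/"label" columns
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=dataset["train"].select(range(64)),   # few-shot subset
    eval_dataset=dataset["validation"],
    loss_class=CosineSimilarityLoss,   # step 1: contrastive fine-tuning
    num_iterations=20,                 # contrastive pairs generated per example
    column_mapping={"sentence": "text", "label": "label"},
)
trainer.train()                        # also fits the classification head (step 2)
metrics = trainer.evaluate()
```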
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("LaTarn/re-clean-setfit-model")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
nikitakapitan/bert-base-uncased-finetuned-clinc_oos-distilled-clinc_oos
|
nikitakapitan
|
text-classification
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,696,238,726,000 | 2023-10-02T10:19:39 | 15 | 0 |
---
base_model: distilbert-base-uncased
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-clinc_oos-distilled-clinc_oos
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- type: accuracy
value: 0.9158064516129032
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-clinc_oos-distilled-clinc_oos
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7724
- Accuracy: 0.9158
## Model Training Details
| Parameter | Value |
|----------------------|--------------------------------------------------|
| **Task** | text-classification |
| **Teacher Model** | bert-base-uncased-finetuned-clinc_oos |
| **Student Model** | distilbert-base-uncased |
| **Dataset Name** | clinc_oos |
| **Dataset Config** | plus |
| **Evaluation Dataset**| validation |
| **Batch Size** | 48 |
| **Number of Epochs** | 5 |
| **Learning Rate** | 0.00002 |
| **Alpha*** | 1 |
*alpha: (Total_loss = alpha * Loss_CE + (1-alpha) * Loss_KD)
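A minimal sketch of the distillation objective described above, with alpha as defined; the temperature term and other implementation details are assumptions rather than the exact training code:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, alpha=1.0, T=2.0):
    # Standard cross-entropy against the gold intent labels
    loss_ce = F.cross_entropy(student_logits, labels)
    # KL divergence between softened teacher and student distributions
    loss_kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T ** 2)
    # Total_loss = alpha * Loss_CE + (1 - alpha) * Loss_KD
    return alpha * loss_ce + (1 - alpha) * loss_kd
```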
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2762 | 0.7284 |
| 3.7824 | 2.0 | 636 | 1.8624 | 0.8358 |
| 3.7824 | 3.0 | 954 | 1.1512 | 0.8984 |
| 1.6858 | 4.0 | 1272 | 0.8540 | 0.9132 |
| 0.8983 | 5.0 | 1590 | 0.7724 | 0.9158 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
mav23/pythia-1b-GGUF
|
mav23
| null |
[
"gguf",
"pytorch",
"causal-lm",
"pythia",
"en",
"dataset:the_pile",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,732,127,048,000 | 2024-11-20T18:33:58 | 77 | 0 |
---
datasets:
- the_pile
language:
- en
license: apache-2.0
tags:
- pytorch
- causal-lm
- pythia
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
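For example, an intermediate checkpoint can be loaded by passing the corresponding branch name as `revision`; the step name below is illustrative:

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# "step143000" is one of the intermediate training checkpoints hosted as a branch
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-1b", revision="step143000")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b", revision="step143000")
```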
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-1B
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-1B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed most likely by the model need not produce the
most “accurate” text. Never rely on Pythia-1B to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
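The quickstart above uses `pythia-70m-deduped`; the same pattern should apply to the model described by this card. A minimal sketch (assuming the standard `EleutherAI/pythia-1b` Transformers checkpoint rather than the GGUF files hosted in this repository):

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Loads the main branch, which corresponds to the step143000 checkpoint.
model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-1b")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b")

inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
print(tokenizer.decode(tokens[0]))
```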
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is a 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-1B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models now were
trained with LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
|
[
"QUESTION_ANSWERING",
"TRANSLATION"
] |
Non_BioNLP
|
MultiBertGunjanPatrick/multiberts-seed-1-160k
|
MultiBertGunjanPatrick
| null |
[
"transformers",
"pytorch",
"bert",
"pretraining",
"exbert",
"multiberts",
"multiberts-seed-1",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:2106.16163",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2021-10-04T04:59:30 | 102 | 0 |
---
datasets:
- bookcorpus
- wikipedia
language: en
license: apache-2.0
tags:
- exbert
- multiberts
- multiberts-seed-1
---
# MultiBERTs Seed 1 Checkpoint 160k (uncased)
This is the MultiBERTs seed-1 intermediate checkpoint at 160k steps: a pretrained BERT model for the English language, trained with a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means each model
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-160k')
model = BertModel.from_pretrained("multiberts-seed-1-160k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a simplified code sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
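As an illustration of the rules above, here is a simplified, hypothetical masking sketch in the style of common MLM data collators. It is not the original preprocessing code, and it omits details such as the exclusion of special tokens:

```python
import torch

def mask_tokens(input_ids, tokenizer, mlm_probability=0.15):
    """Pick 15% of positions, then apply the 80/10/10 replacement rule."""
    labels = input_ids.clone()
    # Select 15% of the tokens as prediction targets
    masked_indices = torch.bernoulli(torch.full(labels.shape, mlm_probability)).bool()
    labels[~masked_indices] = -100  # loss is only computed on masked positions

    # 80% of the selected tokens are replaced by [MASK]
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    input_ids[replaced] = tokenizer.convert_tokens_to_ids(tokenizer.mask_token)

    # 10% are replaced by a random token (half of the remaining 20%); unlike the real
    # procedure, this sketch does not guarantee the random token differs from the original
    random_tok = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~replaced
    input_ids[random_tok] = torch.randint(len(tokenizer), labels.shape, dtype=torch.long)[random_tok]

    # The remaining 10% are left unchanged
    return input_ids, labels
```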
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
[
"QUESTION_ANSWERING"
] |
Non_BioNLP
|
gaudi/opus-mt-fr-swc-ctranslate2
|
gaudi
|
translation
|
[
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,721,920,495,000 | 2024-10-19T04:48:59 | 6 | 0 |
---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-fr-swc)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. Tested against `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs. See the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`Executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-fr-swc).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed from our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below. Modifying this command can yield differing balances between inferencing performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-fr-swc --output_dir ./ctranslate2/opus-mt-fr-swc-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-fr-swc-ctranslate2
```
#### Take the python code below and update the 'model_dir' variable to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-fr-swc-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-fr-swc-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-fr-swc) by Helsinki-NLP.
|
[
"TRANSLATION"
] |
Non_BioNLP
|
rezashkv/diffusion_pruning
|
rezashkv
|
text-to-image
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"en",
"arxiv:2406.12042",
"license:mit",
"region:us"
] | 1,718,317,784,000 | 2024-06-19T03:10:07 | 0 | 0 |
---
language:
- en
license: mit
tags:
- text-to-image
- stable-diffusion
- diffusers
---
# APTP: Adaptive Prompt-Tailored Pruning of T2I Diffusion Models
[](https://arxiv.org/abs/2406.12042)
[](https://github.com/rezashkv/diffusion_pruning)
The implementation of the paper ["Not All Prompts Are Made Equal: Prompt-based Pruning of Text-to-Image Diffusion Models"](https://arxiv.org/abs/2406.12042)
## Abstract
Text-to-image (T2I) diffusion models have demonstrated impressive image generation capabilities. Still, their computational intensity prohibits
resource-constrained organizations from deploying T2I models after fine-tuning them on their internal target data. While pruning
techniques offer a potential solution to reduce the computational burden of T2I models, static pruning methods use the same pruned
model for all input prompts, overlooking the varying capacity requirements of different prompts. Dynamic pruning addresses this issue by utilizing
a separate sub-network for each prompt, but it prevents batch parallelism on GPUs. To overcome these limitations, we introduce
Adaptive Prompt-Tailored Pruning (APTP), a novel prompt-based pruning method designed for T2I diffusion models. Central to our approach is a
prompt router model, which learns to determine the required capacity for an input text prompt and routes it to an architecture code, given a
total desired compute budget for prompts. Each architecture code represents a specialized model tailored to the prompts assigned to it, and the
number of codes is a hyperparameter. We train the prompt router and architecture codes using contrastive learning, ensuring that similar prompts
are mapped to nearby codes. Further, we employ optimal transport to prevent the codes from collapsing into a single one. We demonstrate APTP's
effectiveness by pruning Stable Diffusion (SD) V2.1 using CC3M and COCO as target datasets. APTP outperforms the
single-model pruning baselines in terms of FID, CLIP, and CMMD scores. Our analysis of the clusters learned by APTP reveals they
are semantically meaningful. We also show that APTP can automatically discover previously empirically found challenging prompts for SD, e.g., prompts for generating text images, assigning them to higher capacity codes.
<p align="center">
<img src="assets/fig_1.gif" alt="APTP Overview" width="600" />
</p>
<p align="left">
<em>APTP: We prune a text-to-image diffusion model like Stable Diffusion (left) into a mixture of efficient experts (right) in a prompt-based manner. Our prompt router routes distinct types of prompts to different experts, allowing experts' architectures to be separately specialized by removing layers or channels.</em>
</p>
<p align="center">
<img src="assets/fig_2.gif" alt="APTP Pruning Scheme" width="600" />
</p>
<p align="left">
<em>APTP pruning scheme. We train the prompt router and the set of architecture codes to prune a T2I diffusion model into a mixture of experts. The prompt router consists of three modules. We use a Sentence Transformer as the prompt encoder to encode the input prompt into a representation z. Then, the architecture predictor transforms z into the architecture embedding e that has the same dimensionality as architecture codes. Finally, the router routes the embedding e into an architecture code a(i). We use optimal transport to evenly distribute the prompts in a training batch among the architecture codes. The architecture code a(i) = (u(i), v(i)) determines pruning the model’s width and depth. We train the prompt router’s parameters and architecture codes in an end-to-end manner using the denoising objective of the pruned model L<sub>DDPM</sub>, distillation loss between the pruned and original models L<sub>distill</sub>, average resource usage for the samples in the batch R, and contrastive objective L<sub>cont</sub>, encouraging embeddings e preserving semantic similarity of the representations z.</em>
</p>
### Model Description
- **Developed by:** UMD Efficiency Group
- **Model type:** Text-to-Image Diffusion Model
- **Model Description:** APTP is a pruning scheme for text-to-image diffusion models like Stable Diffusion, resulting in a mixture of efficient experts specialized for different prompt types.
### License
APTP is released under the MIT License. Please see the [LICENSE](LICENSE) file for details.
## Training Dataset
We used Conceptual Captions and MS-COCO 2014 datasets for training the models. Details for downloading and preparing these datasets are provided in the [Github Repository](https://github.com/rezashkv/diffusion_pruning).
## File Structure
```
APTP
├── APTP-Base-CC3M
│ ├── arch0
│ ├── ...
│ └── arch15
├── APTP-Small-CC3M
│ ├── arch0
│ ├── ...
│ └── arch7
├── APTP-Base-COCO
│ ├── arch0
│ ├── ...
│ └── arch7
└── APTP-Small-COCO
├── arch0
├── ...
└── arch7
```
## Simple Inference Example
Make sure you follow the [provided instructions](https://github.com/rezashkv/diffusion_pruning?tab=readme-ov-file#installation) to install pdm from source.
```python
from diffusers import StableDiffusionPipeline, PNDMScheduler
from pdm.models import HyperStructure, StructureVectorQuantizer, UNet2DConditionModelPruned
from pdm.utils.data_utils import get_mpnet_embeddings
from transformers import AutoTokenizer, AutoModel
import torch
prompt_encoder_model_name_or_path = "sentence-transformers/all-mpnet-base-v2"
aptp_model_name_or_path = f"rezashkv/APTP"
aptp_variant = "APTP-Base-CC3M"
sd_model_name_or_path = "stabilityai/stable-diffusion-2-1"
prompt_encoder = AutoModel.from_pretrained(prompt_encoder_model_name_or_path)
prompt_encoder_tokenizer = AutoTokenizer.from_pretrained(prompt_encoder_model_name_or_path)
hyper_net = HyperStructure.from_pretrained(aptp_model_name_or_path, subfolder=f"{aptp_variant}/hypernet")
quantizer = StructureVectorQuantizer.from_pretrained(aptp_model_name_or_path, subfolder=f"{aptp_variant}/quantizer")
prompts = ["a woman on a white background looks down and away from the camera the a forlorn look on her face"]
prompt_embedding = get_mpnet_embeddings(prompts, prompt_encoder, prompt_encoder_tokenizer)
arch_embedding = hyper_net(prompt_embedding)
expert_id = quantizer.get_cosine_sim_min_encoding_indices(arch_embedding)[0].item()
unet = UNet2DConditionModelPruned.from_pretrained(aptp_model_name_or_path,
subfolder=f"{aptp_variant}/arch{expert_id}/checkpoint-30000/unet")
noise_scheduler = PNDMScheduler.from_pretrained(sd_model_name_or_path, subfolder="scheduler")
pipeline = StableDiffusionPipeline.from_pretrained(sd_model_name_or_path, unet=unet, scheduler=noise_scheduler)
pipeline.to('cuda')
generator = torch.Generator(device='cuda').manual_seed(43)
image = pipeline(
prompt=prompts[0],
guidance_scale=7.5,
generator=generator,
output_type='pil',
).images[0]
image.save("image.png")
```
## Uses
This model is designed for academic and research purposes, specifically for exploring the efficiency of text-to-image diffusion models through prompt-based pruning. Potential applications include:
1. **Research:** Researchers can use the model to study prompt-based pruning techniques and their impact on the performance and efficiency of text-to-image generation models.
2. **Education:** Educators and students can use this model as a learning tool for understanding advanced concepts in neural network pruning, diffusion models, and prompt engineering.
3. **Benchmarking:** The model can be used for benchmarking against other text-to-image generation models to assess the trade-offs between computational efficiency and output quality.
## Safety
When using these models, it is important to consider the following safety and ethical guidelines:
1. **Content Generation:** The model can generate a wide range of images based on text prompts. Users should ensure that the generated content adheres to ethical guidelines and does not produce harmful, offensive, or inappropriate images.
2. **Bias and Fairness:** Like other AI models, APTP may exhibit biases present in the training data. Users should be aware of these potential biases and take steps to mitigate their impact, particularly when the model is used in sensitive or critical applications.
3. **Data Privacy:** Ensure that any data used with the model complies with data privacy regulations. Avoid using personally identifiable information (PII) or sensitive data without proper consent.
4. **Responsible Use:** Users are encouraged to use the model responsibly, considering the potential social and ethical implications of their work. This includes avoiding the generation of misleading or false information and respecting the rights and dignity of individuals depicted in generated images.
By adhering to these guidelines, users can help ensure the responsible and ethical use of the APTP model.
## Contact
In case of any questions or issues, please contact the authors of the paper:
* [Reza Shirkavand](mailto:[email protected])
* [Alireza Ganjdanesh](mailto:[email protected])
|
[
"SEMANTIC_SIMILARITY"
] |
Non_BioNLP
|
TransferGraph/Jeevesh8_512seq_len_6ep_bert_ft_cola-91-finetuned-lora-tweet_eval_hate
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:Jeevesh8/512seq_len_6ep_bert_ft_cola-91",
"base_model:adapter:Jeevesh8/512seq_len_6ep_bert_ft_cola-91",
"model-index",
"region:us"
] | 1,709,214,108,000 | 2024-02-29T13:41:51 | 0 | 0 |
---
base_model: Jeevesh8/512seq_len_6ep_bert_ft_cola-91
datasets:
- tweet_eval
library_name: peft
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: Jeevesh8_512seq_len_6ep_bert_ft_cola-91-finetuned-lora-tweet_eval_hate
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: hate
split: validation
args: hate
metrics:
- type: accuracy
value: 0.73
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Jeevesh8_512seq_len_6ep_bert_ft_cola-91-finetuned-lora-tweet_eval_hate
This model is a fine-tuned version of [Jeevesh8/512seq_len_6ep_bert_ft_cola-91](https://huggingface.co/Jeevesh8/512seq_len_6ep_bert_ft_cola-91) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.73
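The card does not include a usage snippet; a minimal, hypothetical sketch of loading this LoRA adapter on top of its base model with PEFT could look like the following (the binary label setup for the tweet_eval `hate` config is an assumption based on the dataset, and the adapter may also carry its own classification head):

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "Jeevesh8/512seq_len_6ep_bert_ft_cola-91"
adapter_id = "TransferGraph/Jeevesh8_512seq_len_6ep_bert_ft_cola-91-finetuned-lora-tweet_eval_hate"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# tweet_eval "hate" is a binary (hate / non-hate) classification task
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("What a lovely day!", return_tensors="pt")
predicted_label = model(**inputs).logits.argmax(dim=-1).item()
print(predicted_label)
```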
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.45 | None | 0 |
| 0.68 | 0.6743 | 0 |
| 0.723 | 0.5277 | 1 |
| 0.718 | 0.4791 | 2 |
| 0.73 | 0.4581 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
Language-Media-Lab/mt5-small-ain-jpn-mt
|
Language-Media-Lab
|
translation
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"translation",
"jpn",
"ain",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2022-02-04T13:20:55 | 119 | 0 |
---
language:
- jpn
- ain
tags:
- translation
---
mt5-small-ain-jpn-mt is a machine translation model built on [Google's mT5-small](https://huggingface.co/google/mt5-small) and fine-tuned on bilingual datasets crawled from the Web. It translates text from the Ainu language into Japanese.
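A minimal usage sketch with the `transformers` library is shown below. It is illustrative only: the generation settings are assumptions, and the card does not document whether a task prefix is expected on the input.

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

model_id = "Language-Media-Lab/mt5-small-ain-jpn-mt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = MT5ForConditionalGeneration.from_pretrained(model_id)

# Translate an Ainu sentence into Japanese
text = "irankarapte"  # a common Ainu greeting
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```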
|
[
"TRANSLATION"
] |
Non_BioNLP
|
jingyeom/korean_embedding_model
|
jingyeom
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"mteb",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,705,279,515,000 | 2024-01-15T00:48:35 | 0 | 1 |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: korean_embedding_model
results:
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 62.462024005162874
- type: cos_sim_spearman
value: 59.04592371468026
- type: euclidean_pearson
value: 60.118409297960774
- type: euclidean_spearman
value: 59.04592371468026
- type: manhattan_pearson
value: 59.6758261833799
- type: manhattan_spearman
value: 59.10255151100711
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 69.54306440280438
- type: cos_sim_spearman
value: 62.859142390813574
- type: euclidean_pearson
value: 65.6949193466544
- type: euclidean_spearman
value: 62.859152754778854
- type: manhattan_pearson
value: 65.65986839533139
- type: manhattan_spearman
value: 62.82868162534342
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 66.06384755873458
- type: cos_sim_spearman
value: 62.589736136651894
- type: euclidean_pearson
value: 62.78577890775041
- type: euclidean_spearman
value: 62.588858379781634
- type: manhattan_pearson
value: 62.827478623777985
- type: manhattan_spearman
value: 62.617997229102706
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 71.86398880834443
- type: cos_sim_spearman
value: 72.1348002553312
- type: euclidean_pearson
value: 71.6796109730168
- type: euclidean_spearman
value: 72.1349022685911
- type: manhattan_pearson
value: 71.66477952415218
- type: manhattan_spearman
value: 72.09093373400123
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 70.22680219584427
- type: cos_sim_spearman
value: 67.0818395499375
- type: euclidean_pearson
value: 68.24498247750782
- type: euclidean_spearman
value: 67.0818306104199
- type: manhattan_pearson
value: 68.23186143435814
- type: manhattan_spearman
value: 67.06973319437314
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 75.54853695205654
- type: cos_sim_spearman
value: 75.93775396598934
- type: euclidean_pearson
value: 75.10618334577337
- type: euclidean_spearman
value: 75.93775372510834
- type: manhattan_pearson
value: 75.123200749426
- type: manhattan_spearman
value: 75.95755907955946
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 70.22928051288379
- type: cos_sim_spearman
value: 70.13385961598065
- type: euclidean_pearson
value: 69.66948135244029
- type: euclidean_spearman
value: 70.13385923761084
- type: manhattan_pearson
value: 69.66975130970742
- type: manhattan_spearman
value: 70.16415157887303
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 77.12344529924287
- type: cos_sim_spearman
value: 77.13355009366349
- type: euclidean_pearson
value: 77.73092283054677
- type: euclidean_spearman
value: 77.13355009366349
- type: manhattan_pearson
value: 77.59037018668798
- type: manhattan_spearman
value: 77.00181739561044
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 60.402875441797896
- type: cos_sim_spearman
value: 62.21971197434699
- type: euclidean_pearson
value: 63.08540172189354
- type: euclidean_spearman
value: 62.21971197434699
- type: manhattan_pearson
value: 62.971870200624714
- type: manhattan_spearman
value: 62.17079870601948
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 69.14110875934769
- type: cos_sim_spearman
value: 67.83869999603111
- type: euclidean_pearson
value: 68.32930987602938
- type: euclidean_spearman
value: 67.8387112205369
- type: manhattan_pearson
value: 68.385068161592
- type: manhattan_spearman
value: 67.86635507968924
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.185534982566132
- type: cos_sim_spearman
value: 28.71714958933386
- type: dot_pearson
value: 29.185527195235316
- type: dot_spearman
value: 28.71714958933386
---
# jingyeom/korean_embedding_model
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jingyeom/korean_embedding_model')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jingyeom/korean_embedding_model)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
TransferGraph/dhimskyy_wiki-bert-finetuned-lora-tweet_eval_emotion
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:dhimskyy/wiki-bert",
"base_model:adapter:dhimskyy/wiki-bert",
"model-index",
"region:us"
] | 1,709,211,031,000 | 2024-02-29T12:50:33 | 0 | 0 |
---
base_model: dhimskyy/wiki-bert
datasets:
- tweet_eval
library_name: peft
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: dhimskyy_wiki-bert-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
split: validation
args: emotion
metrics:
- type: accuracy
value: 0.43315508021390375
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dhimskyy_wiki-bert-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [dhimskyy/wiki-bert](https://huggingface.co/dhimskyy/wiki-bert) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.2353 | None | 0 |
| 0.4251 | 1.2739 | 0 |
| 0.4305 | 1.2626 | 1 |
| 0.4278 | 1.2564 | 2 |
| 0.4332 | 1.2526 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
RichardErkhov/Qwen_-_Qwen2-0.5B-4bits
|
RichardErkhov
| null |
[
"safetensors",
"qwen2",
"4-bit",
"bitsandbytes",
"region:us"
] | 1,730,295,381,000 | 2024-10-30T13:36:50 | 4 | 0 |
---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen2-0.5B - bnb 4bits
- Model creator: https://huggingface.co/Qwen/
- Original model: https://huggingface.co/Qwen/Qwen2-0.5B/
Original model description:
---
language:
- en
pipeline_tag: text-generation
tags:
- pretrained
license: apache-2.0
new_version: Qwen/Qwen2.5-0.5B
---
# Qwen2-0.5B
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the 0.5B Qwen2 base language model.
Compared with state-of-the-art open-source language models, including the previously released Qwen1.5, Qwen2 has generally surpassed most open-source models and demonstrated competitiveness against proprietary models across a series of benchmarks targeting language understanding, language generation, multilingual capability, coding, mathematics, reasoning, etc.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2/), [GitHub](https://github.com/QwenLM/Qwen2), and [Documentation](https://qwen.readthedocs.io/en/latest/).
<br>
## Model Details
Qwen2 is a language model series including decoder language models of different model sizes. For each size, we release the base language model and the aligned chat model. It is based on the Transformer architecture with SwiGLU activation, attention QKV bias, group query attention, etc. Additionally, we have an improved tokenizer adaptive to multiple natural languages and codes.
## Requirements
The code of Qwen2 has been in the latest Hugging Face `transformers`, and we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:
```
KeyError: 'qwen2'
```
## Usage
We do not advise you to use base language models for text generation. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model.
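For reference, a minimal sketch of loading this 4-bit (bitsandbytes) quantized checkpoint — for example as a starting point for inspection or further training — might look as follows. The dtype choice below is an assumption; `bitsandbytes` and `accelerate` are assumed to be installed and a CUDA GPU available:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/Qwen_-_Qwen2-0.5B-4bits"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config is stored with the checkpoint, so no extra
# quantization arguments should be needed here.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,
)
print(f"Loaded {model.num_parameters():,} parameters")
```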
## Performance
The evaluation of base models mainly focuses on the model performance of natural language understanding, general question answering, coding, mathematics, scientific knowledge, reasoning, multilingual capability, etc.
The datasets for evaluation include:
**English Tasks**: MMLU (5-shot), MMLU-Pro (5-shot), GPQA (5-shot), Theorem QA (5-shot), BBH (3-shot), HellaSwag (10-shot), Winogrande (5-shot), TruthfulQA (0-shot), ARC-C (25-shot)
**Coding Tasks**: EvalPlus (0-shot) (HumanEval, MBPP, HumanEval+, MBPP+), MultiPL-E (0-shot) (Python, C++, JAVA, PHP, TypeScript, C#, Bash, JavaScript)
**Math Tasks**: GSM8K (4-shot), MATH (4-shot)
**Chinese Tasks**: C-Eval(5-shot), CMMLU (5-shot)
**Multilingual Tasks**: Multi-Exam (M3Exam 5-shot, IndoMMLU 3-shot, ruMMLU 5-shot, mMMLU 5-shot), Multi-Understanding (BELEBELE 5-shot, XCOPA 5-shot, XWinograd 5-shot, XStoryCloze 0-shot, PAWS-X 5-shot), Multi-Mathematics (MGSM 8-shot), Multi-Translation (Flores-101 5-shot)
#### Qwen2-0.5B & Qwen2-1.5B performances
| Datasets | Phi-2 | Gemma-2B | MiniCPM | Qwen1.5-1.8B | Qwen2-0.5B | Qwen2-1.5B |
| :--------| :---------: | :------------: | :------------: |:------------: | :------------: | :------------: |
|#Non-Emb Params | 2.5B | 2.0B | 2.4B | 1.3B | 0.35B | 1.3B |
|MMLU | 52.7 | 42.3 | 53.5 | 46.8 | 45.4 | **56.5** |
|MMLU-Pro | - | 15.9 | - | - | 14.7 | 21.8 |
|Theorem QA | - | - | - |- | 8.9 | **15.0** |
|HumanEval | 47.6 | 22.0 |**50.0**| 20.1 | 22.0 | 31.1 |
|MBPP | **55.0** | 29.2 | 47.3 | 18.0 | 22.0 | 37.4 |
|GSM8K | 57.2 | 17.7 | 53.8 | 38.4 | 36.5 | **58.5** |
|MATH | 3.5 | 11.8 | 10.2 | 10.1 | 10.7 | **21.7** |
|BBH | **43.4** | 35.2 | 36.9 | 24.2 | 28.4 | 37.2 |
|HellaSwag | **73.1** | 71.4 | 68.3 | 61.4 | 49.3 | 66.6 |
|Winogrande | **74.4** | 66.8 | -| 60.3 | 56.8 | 66.2 |
|ARC-C | **61.1** | 48.5 | -| 37.9 | 31.5 | 43.9 |
|TruthfulQA | 44.5 | 33.1 | -| 39.4 | 39.7 | **45.9** |
|C-Eval | 23.4 | 28.0 | 51.1| 59.7 | 58.2 | **70.6** |
|CMMLU | 24.2 | - | 51.1 | 57.8 | 55.1 | **70.3** |
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{qwen2,
title={Qwen2 Technical Report},
year={2024}
}
```
|
[
"QUESTION_ANSWERING",
"TRANSLATION"
] |
Non_BioNLP
|
TransferGraph/nurkayevaa_autonlp-bert-covid-407910458-finetuned-lora-tweet_eval_sentiment
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:nurkayevaa/autonlp-bert-covid-407910458",
"base_model:adapter:nurkayevaa/autonlp-bert-covid-407910458",
"model-index",
"region:us"
] | 1,709,212,132,000 | 2024-02-29T13:08:54 | 0 | 0 |
---
base_model: nurkayevaa/autonlp-bert-covid-407910458
datasets:
- tweet_eval
library_name: peft
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: nurkayevaa_autonlp-bert-covid-407910458-finetuned-lora-tweet_eval_sentiment
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: sentiment
split: validation
args: sentiment
metrics:
- type: accuracy
value: 0.707
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nurkayevaa_autonlp-bert-covid-407910458-finetuned-lora-tweet_eval_sentiment
This model is a fine-tuned version of [nurkayevaa/autonlp-bert-covid-407910458](https://huggingface.co/nurkayevaa/autonlp-bert-covid-407910458) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.3165 | None | 0 |
| 0.7005 | 0.7344 | 0 |
| 0.6975 | 0.6591 | 1 |
| 0.701 | 0.6363 | 2 |
| 0.707 | 0.6200 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
north/t5_large_NCC
|
north
|
text2text-generation
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"no",
"nn",
"sv",
"dk",
"is",
"en",
"dataset:nbailab/NCC",
"dataset:mc4",
"dataset:wikipedia",
"arxiv:2104.09617",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,653,133,590,000 | 2022-10-13T13:54:32 | 26 | 1 |
---
datasets:
- nbailab/NCC
- mc4
- wikipedia
language:
- false
- nn
- sv
- dk
- is
- en
license: apache-2.0
widget:
- text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>.
Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>,
må over halvparten av regjeringens <extra_id_4> være til stede.
- text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2>
seg ned og lese den.
---
The North-T5-models are a set of Norwegian and Scandinavian sequence-to-sequence-models. It builds upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation.
| |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_|
|:-----------|:------------:|:------------:|:------------:|:------------:|:------------:|
|North-T5‑NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|✔|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)||
|North-T5‑NCC‑lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)||
## T5X Checkpoint
The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/large/norwegian_NCC_plus_English_t5x_large/).
## Performance
A thorough evaluation of the North-T5 models is planned, and I strongly recommend that external researchers make their own evaluation. The main advantage of the T5 models is their flexibility. Traditionally, encoder-only models (like BERT) excel in classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Despite this, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617).
|**Model:** | **F1** |
|:-----------|:------------|
|mT5-base|73.2 |
|mBERT-base|78.4 |
|NorBERT-base|78.2 |
|North-T5-small|80.5 |
|nb-bert-base|81.8 |
|North-T5-base|85.3 |
|North-T5-large|86.7 |
|North-T5-xl|88.7 |
|North-T5-xxl|91.8|
These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT-models are based on the test-results from the best model after 10 runs with early stopping and a decaying learning rate. The T5-results are the average of five runs on the evaluation set. The small model was trained for 10.000 steps, while the rest were trained for 5.000 steps. A fixed learning rate was used (no decay), and no early stopping. Nor was the recommended rank classification used. We use a max sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 model might actually be a bit sub-optimal.
## Sub-versions of North-T5
The following sub-versions are available. More versions will be available shortly.
|**Model** | **Description** |
|:-----------|:-------|
|**North‑T5‑NCC** |This is the main version. It is trained for an additional 500.000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition, data from MC4 and English Wikipedia have been added.|
|**North‑T5‑NCC‑lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. For tasks such as translation and NLI, it is well documented that an additional step of unsupervised LM training before fine-tuning gives a clear benefit.|
## Fine-tuned versions
As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained by using a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base models in a Google Colab.
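As a concrete illustration, here is a minimal fine-tuning sketch using the Hugging Face `Seq2SeqTrainer`. It assumes a simple CSV file with `source` and `target` text columns; the file name, column names and most training arguments are illustrative, not the exact setup used for the released models.
```python
# Minimal fine-tuning sketch for a North-T5 base model (illustrative data and arguments).
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "north/t5_base_NCC"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical local dataset with "source" and "target" columns; replace with your own data.
dataset = load_dataset("csv", data_files={"train": "train.csv"})

def preprocess(batch):
    model_inputs = tokenizer(batch["source"], max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["target"], max_length=512, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset["train"].map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

args = Seq2SeqTrainingArguments(
    output_dir="north-t5-finetuned",
    learning_rate=1e-3,              # fixed learning rate, as recommended above
    per_device_train_batch_size=8,
    num_train_epochs=3,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```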
Since some people really want to see what the models are capable of without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used.
* Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base)
* DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser), directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base), or with the minimal sketch below.
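Below is a minimal inference sketch for the DeUnCaser demo model using the Transformers pipeline API; the input sentence is only an illustration.
```python
# Minimal inference sketch for the DeUnCaser demo model (illustrative input).
from transformers import pipeline

deuncaser = pipeline("text2text-generation", model="north/demo-deuncaser-base")
text = "dette er en test uten tegnsetting og uten store bokstaver"
print(deuncaser(text, max_length=128)[0]["generated_text"])
```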
## Training details
All models are built using the Flax-based T5X codebase, and all models are initiated with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking task. This also means that the models (contrary to the original T5) need to be finetuned to solve specific tasks. This finetuning is however usually not very compute intensive, and in most cases it can be performed even with free online training resources.
All the main model versions are trained for 500.000 steps after the mT5 checkpoint (1.000.000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks.
While the huge models almost always will give the best results, they are also both more difficult and more expensive to finetune. I strongly recommend starting by finetuning a base model. The base models can easily be finetuned on a standard graphics card or a free TPU through Google Colab.
All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to finetune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when finetuning. The large models still require access to significant hardware, even for finetuning.
## Formats
All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both finetuning and inference. All models, except the XXL model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for finetuning or inference in Flax, PyTorch and TensorFlow formats.
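Assuming the converted checkpoints work with the standard T5 classes in Transformers, a minimal loading sketch looks like this; the span-corruption input mirrors the widget examples above.
```python
# Loading a North-T5 checkpoint in PyTorch (sketch); TensorFlow and Flax classes are analogous.
from transformers import AutoTokenizer, T5ForConditionalGeneration   # PyTorch
# from transformers import TFT5ForConditionalGeneration              # TensorFlow
# from transformers import FlaxT5ForConditionalGeneration            # Flax

model_name = "north/t5_base_NCC"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Span-corruption style input, matching the widget examples above.
inputs = tokenizer("På <extra_id_0> kan man <extra_id_1> en bok.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```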
## Future
I will continue to train and release additional models in this set. Which models are added will depend on feedback from the users.
## Thanks
This release would not have been possible without the support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running.
Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition he has been a discussion partner in the creation of these models.
Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X.
## Warranty
Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases.
## Contact/About
These models were trained by Per E Kummervold. Please contact me on [email protected].
|
[
"TRANSLATION"
] |
Non_BioNLP
|
mmcquade11-test/reuters-summarization
|
mmcquade11-test
|
text2text-generation
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autonlp",
"en",
"dataset:mmcquade11/autonlp-data-reuters-summarization",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2021-11-30T21:43:51 | 16 | 0 |
---
datasets:
- mmcquade11/autonlp-data-reuters-summarization
language: en
tags:
- autonlp
widget:
- text: I love AutoNLP 🤗
co2_eq_emissions: 286.4350821612984
---
This is an AutoNLP model I trained on the Reuters dataset.
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 34018133
- CO2 Emissions (in grams): 286.4350821612984
## Validation Metrics
- Loss: 1.1805976629257202
- Rouge1: 55.4013
- Rouge2: 30.8004
- RougeL: 52.57
- RougeLsum: 52.6103
- Gen Len: 15.3458
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/mmcquade11/autonlp-reuters-summarization-34018133
```
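Alternatively, a minimal Python sketch using the Transformers pipeline API (assuming the checkpoint loads with the standard Pegasus classes); the article text is only a placeholder.
```python
# Minimal summarization sketch for the AutoNLP Pegasus checkpoint (placeholder input).
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="mmcquade11/autonlp-reuters-summarization-34018133",
)
article = "Replace this placeholder with a news article to summarize."
print(summarizer(article, max_length=64, min_length=10)[0]["summary_text"])
```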
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
microsoft/prophetnet-large-uncased-cnndm
|
microsoft
|
text2text-generation
|
[
"transformers",
"pytorch",
"rust",
"prophetnet",
"text2text-generation",
"en",
"dataset:cnn_dailymail",
"arxiv:2001.04063",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2023-01-24T16:56:43 | 965 | 2 |
---
datasets:
- cnn_dailymail
language: en
---
## prophetnet-large-uncased-cnndm
Fine-tuned weights(converted from [original fairseq version repo](https://github.com/microsoft/ProphetNet)) for [ProphetNet](https://arxiv.org/abs/2001.04063) on summarization task CNN/DailyMail.
ProphetNet is a new pre-trained language model for sequence-to-sequence learning with a novel self-supervised objective called future n-gram prediction.
ProphetNet is able to predict more future tokens with an n-stream decoder. The original implementation is the Fairseq version at the [github repo](https://github.com/microsoft/ProphetNet).
### Usage
```python
from transformers import ProphetNetTokenizer, ProphetNetForConditionalGeneration, ProphetNetConfig
model = ProphetNetForConditionalGeneration.from_pretrained('microsoft/prophetnet-large-uncased-cnndm')
tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/prophetnet-large-uncased-cnndm')
ARTICLE_TO_SUMMARIZE = "USTC was founded in Beijing by the Chinese Academy of Sciences (CAS) in September 1958. The Director of CAS, Mr. Guo Moruo was appointed the first president of USTC. USTC's founding mission was to develop a high-level science and technology workforce, as deemed critical for development of China's economy, defense, and science and technology education. The establishment was hailed as \"A Major Event in the History of Chinese Education and Science.\" CAS has supported USTC by combining most of its institutes with the departments of the university. USTC is listed in the top 16 national key universities, becoming the youngest national key university.".lower()
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=100, return_tensors='pt')
# Generate Summary
summary_ids = model.generate(inputs['input_ids'], num_beams=4, max_length=512, early_stopping=True)
tokenizer.batch_decode(summary_ids, skip_special_tokens=True)
# should give: 'ustc was founded in beijing by the chinese academy of sciences in 1958. [X_SEP] ustc\'s mission was to develop a high - level science and technology workforce. [X_SEP] the establishment was hailed as " a major event in the history of chinese education and science "'
```
Here, [X_SEP] is used as a special token to separate sentences.
### Citation
```bibtex
@article{yan2020prophetnet,
title={Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training},
author={Yan, Yu and Qi, Weizhen and Gong, Yeyun and Liu, Dayiheng and Duan, Nan and Chen, Jiusheng and Zhang, Ruofei and Zhou, Ming},
journal={arXiv preprint arXiv:2001.04063},
year={2020}
}
```
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
mtsdurica/madlad400-3b-mt-Q4_0-GGUF
|
mtsdurica
|
translation
|
[
"transformers",
"gguf",
"text2text-generation",
"text-generation-inference",
"llama-cpp",
"gguf-my-repo",
"translation",
"multilingual",
"en",
"ru",
"es",
"fr",
"de",
"it",
"pt",
"pl",
"nl",
"vi",
"tr",
"sv",
"id",
"ro",
"cs",
"zh",
"hu",
"ja",
"th",
"fi",
"fa",
"uk",
"da",
"el",
"no",
"bg",
"sk",
"ko",
"ar",
"lt",
"ca",
"sl",
"he",
"et",
"lv",
"hi",
"sq",
"ms",
"az",
"sr",
"ta",
"hr",
"kk",
"is",
"ml",
"mr",
"te",
"af",
"gl",
"fil",
"be",
"mk",
"eu",
"bn",
"ka",
"mn",
"bs",
"uz",
"ur",
"sw",
"yue",
"ne",
"kn",
"kaa",
"gu",
"si",
"cy",
"eo",
"la",
"hy",
"ky",
"tg",
"ga",
"mt",
"my",
"km",
"tt",
"so",
"ku",
"ps",
"pa",
"rw",
"lo",
"ha",
"dv",
"fy",
"lb",
"ckb",
"mg",
"gd",
"am",
"ug",
"ht",
"grc",
"hmn",
"sd",
"jv",
"mi",
"tk",
"ceb",
"yi",
"ba",
"fo",
"or",
"xh",
"su",
"kl",
"ny",
"sm",
"sn",
"co",
"zu",
"ig",
"yo",
"pap",
"st",
"haw",
"as",
"oc",
"cv",
"lus",
"tet",
"gsw",
"sah",
"br",
"rm",
"sa",
"bo",
"om",
"se",
"ce",
"cnh",
"ilo",
"hil",
"udm",
"os",
"lg",
"ti",
"vec",
"ts",
"tyv",
"kbd",
"ee",
"iba",
"av",
"kha",
"to",
"tn",
"nso",
"fj",
"zza",
"ak",
"ada",
"otq",
"dz",
"bua",
"cfm",
"ln",
"chm",
"gn",
"krc",
"wa",
"hif",
"yua",
"srn",
"war",
"rom",
"bik",
"pam",
"sg",
"lu",
"ady",
"kbp",
"syr",
"ltg",
"myv",
"iso",
"kac",
"bho",
"ay",
"kum",
"qu",
"za",
"pag",
"ngu",
"ve",
"pck",
"zap",
"tyz",
"hui",
"bbc",
"tzo",
"tiv",
"ksd",
"gom",
"min",
"ang",
"nhe",
"bgp",
"nzi",
"nnb",
"nv",
"zxx",
"bci",
"kv",
"new",
"mps",
"alt",
"meu",
"bew",
"fon",
"iu",
"abt",
"mgh",
"mnw",
"tvl",
"dov",
"tlh",
"ho",
"kw",
"mrj",
"meo",
"crh",
"mbt",
"emp",
"ace",
"ium",
"mam",
"gym",
"mai",
"crs",
"pon",
"ubu",
"fip",
"quc",
"gv",
"kj",
"btx",
"ape",
"chk",
"rcf",
"shn",
"tzh",
"mdf",
"ppk",
"ss",
"gag",
"cab",
"kri",
"seh",
"ibb",
"tbz",
"bru",
"enq",
"ach",
"cuk",
"kmb",
"wo",
"kek",
"qub",
"tab",
"bts",
"kos",
"rwo",
"cak",
"tuc",
"bum",
"cjk",
"gil",
"stq",
"tsg",
"quh",
"mak",
"arn",
"ban",
"jiv",
"sja",
"yap",
"tcy",
"toj",
"twu",
"xal",
"amu",
"rmc",
"hus",
"nia",
"kjh",
"bm",
"guh",
"mas",
"acf",
"dtp",
"ksw",
"bzj",
"din",
"zne",
"mad",
"msi",
"mag",
"mkn",
"kg",
"lhu",
"ch",
"qvi",
"mh",
"djk",
"sus",
"mfe",
"srm",
"dyu",
"ctu",
"gui",
"pau",
"inb",
"bi",
"mni",
"guc",
"jam",
"wal",
"jac",
"bas",
"gor",
"skr",
"nyu",
"noa",
"sda",
"gub",
"nog",
"cni",
"teo",
"tdx",
"sxn",
"rki",
"nr",
"frp",
"alz",
"taj",
"lrc",
"cce",
"rn",
"jvn",
"hvn",
"nij",
"dwr",
"izz",
"msm",
"bus",
"ktu",
"chr",
"maz",
"tzj",
"suz",
"knj",
"bim",
"gvl",
"bqc",
"tca",
"pis",
"prk",
"laj",
"mel",
"qxr",
"niq",
"ahk",
"shp",
"hne",
"spp",
"koi",
"krj",
"quf",
"luz",
"agr",
"tsc",
"mqy",
"gof",
"gbm",
"miq",
"dje",
"awa",
"bjj",
"qvz",
"sjp",
"tll",
"raj",
"kjg",
"bgz",
"quy",
"cbk",
"akb",
"oj",
"ify",
"mey",
"ks",
"cac",
"brx",
"qup",
"syl",
"jax",
"ff",
"ber",
"tks",
"trp",
"mrw",
"adh",
"smt",
"srr",
"ffm",
"qvc",
"mtr",
"ann",
"aa",
"noe",
"nut",
"gyn",
"kwi",
"xmm",
"msb",
"dataset:allenai/MADLAD-400",
"base_model:google/madlad400-3b-mt",
"base_model:quantized:google/madlad400-3b-mt",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,720,882,897,000 | 2024-07-13T15:01:51 | 45 | 0 |
---
base_model: google/madlad400-3b-mt
datasets:
- allenai/MADLAD-400
language:
- multilingual
- en
- ru
- es
- fr
- de
- it
- pt
- pl
- nl
- vi
- tr
- sv
- id
- ro
- cs
- zh
- hu
- ja
- th
- fi
- fa
- uk
- da
- el
- 'no'
- bg
- sk
- ko
- ar
- lt
- ca
- sl
- he
- et
- lv
- hi
- sq
- ms
- az
- sr
- ta
- hr
- kk
- is
- ml
- mr
- te
- af
- gl
- fil
- be
- mk
- eu
- bn
- ka
- mn
- bs
- uz
- ur
- sw
- yue
- ne
- kn
- kaa
- gu
- si
- cy
- eo
- la
- hy
- ky
- tg
- ga
- mt
- my
- km
- tt
- so
- ku
- ps
- pa
- rw
- lo
- ha
- dv
- fy
- lb
- ckb
- mg
- gd
- am
- ug
- ht
- grc
- hmn
- sd
- jv
- mi
- tk
- ceb
- yi
- ba
- fo
- or
- xh
- su
- kl
- ny
- sm
- sn
- co
- zu
- ig
- yo
- pap
- st
- haw
- as
- oc
- cv
- lus
- tet
- gsw
- sah
- br
- rm
- sa
- bo
- om
- se
- ce
- cnh
- ilo
- hil
- udm
- os
- lg
- ti
- vec
- ts
- tyv
- kbd
- ee
- iba
- av
- kha
- to
- tn
- nso
- fj
- zza
- ak
- ada
- otq
- dz
- bua
- cfm
- ln
- chm
- gn
- krc
- wa
- hif
- yua
- srn
- war
- rom
- bik
- pam
- sg
- lu
- ady
- kbp
- syr
- ltg
- myv
- iso
- kac
- bho
- ay
- kum
- qu
- za
- pag
- ngu
- ve
- pck
- zap
- tyz
- hui
- bbc
- tzo
- tiv
- ksd
- gom
- min
- ang
- nhe
- bgp
- nzi
- nnb
- nv
- zxx
- bci
- kv
- new
- mps
- alt
- meu
- bew
- fon
- iu
- abt
- mgh
- mnw
- tvl
- dov
- tlh
- ho
- kw
- mrj
- meo
- crh
- mbt
- emp
- ace
- ium
- mam
- gym
- mai
- crs
- pon
- ubu
- fip
- quc
- gv
- kj
- btx
- ape
- chk
- rcf
- shn
- tzh
- mdf
- ppk
- ss
- gag
- cab
- kri
- seh
- ibb
- tbz
- bru
- enq
- ach
- cuk
- kmb
- wo
- kek
- qub
- tab
- bts
- kos
- rwo
- cak
- tuc
- bum
- cjk
- gil
- stq
- tsg
- quh
- mak
- arn
- ban
- jiv
- sja
- yap
- tcy
- toj
- twu
- xal
- amu
- rmc
- hus
- nia
- kjh
- bm
- guh
- mas
- acf
- dtp
- ksw
- bzj
- din
- zne
- mad
- msi
- mag
- mkn
- kg
- lhu
- ch
- qvi
- mh
- djk
- sus
- mfe
- srm
- dyu
- ctu
- gui
- pau
- inb
- bi
- mni
- guc
- jam
- wal
- jac
- bas
- gor
- skr
- nyu
- noa
- sda
- gub
- nog
- cni
- teo
- tdx
- sxn
- rki
- nr
- frp
- alz
- taj
- lrc
- cce
- rn
- jvn
- hvn
- nij
- dwr
- izz
- msm
- bus
- ktu
- chr
- maz
- tzj
- suz
- knj
- bim
- gvl
- bqc
- tca
- pis
- prk
- laj
- mel
- qxr
- niq
- ahk
- shp
- hne
- spp
- koi
- krj
- quf
- luz
- agr
- tsc
- mqy
- gof
- gbm
- miq
- dje
- awa
- bjj
- qvz
- sjp
- tll
- raj
- kjg
- bgz
- quy
- cbk
- akb
- oj
- ify
- mey
- ks
- cac
- brx
- qup
- syl
- jax
- ff
- ber
- tks
- trp
- mrw
- adh
- smt
- srr
- ffm
- qvc
- mtr
- ann
- kaa
- aa
- noe
- nut
- gyn
- kwi
- xmm
- msb
library_name: transformers
license: apache-2.0
pipeline_tag: translation
tags:
- text2text-generation
- text-generation-inference
- llama-cpp
- gguf-my-repo
widget:
- text: <2en> Como vai, amigo?
example_title: Translation to English
- text: <2de> Do you speak German?
example_title: Translation to German
---
# mtsdurica/madlad400-3b-mt-Q4_0-GGUF
This model was converted to GGUF format from [`google/madlad400-3b-mt`](https://huggingface.co/google/madlad400-3b-mt) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/madlad400-3b-mt) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo mtsdurica/madlad400-3b-mt-Q4_0-GGUF --hf-file madlad400-3b-mt-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo mtsdurica/madlad400-3b-mt-Q4_0-GGUF --hf-file madlad400-3b-mt-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo mtsdurica/madlad400-3b-mt-Q4_0-GGUF --hf-file madlad400-3b-mt-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo mtsdurica/madlad400-3b-mt-Q4_0-GGUF --hf-file madlad400-3b-mt-q4_0.gguf -c 2048
```
|
[
"TRANSLATION"
] |
Non_BioNLP
|
gokulsrinivasagan/bert_base_lda_100_stsb
|
gokulsrinivasagan
|
text-classification
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/bert_base_lda_100",
"base_model:finetune:gokulsrinivasagan/bert_base_lda_100",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,732,286,073,000 | 2024-11-22T14:36:23 | 5 | 0 |
---
base_model: gokulsrinivasagan/bert_base_lda_100
datasets:
- glue
language:
- en
library_name: transformers
metrics:
- spearmanr
tags:
- generated_from_trainer
model-index:
- name: bert_base_lda_100_stsb
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- type: spearmanr
value: .nan
name: Spearmanr
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_lda_100_stsb
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_100](https://huggingface.co/gokulsrinivasagan/bert_base_lda_100) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3354
- Pearson: nan
- Spearmanr: nan
- Combined Score: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 6.0379 | 1.0 | 23 | 2.8532 | nan | nan | nan |
| 2.286 | 2.0 | 46 | 2.6158 | nan | nan | nan |
| 2.1985 | 3.0 | 69 | 2.3354 | nan | nan | nan |
| 2.1934 | 4.0 | 92 | 2.4655 | nan | nan | nan |
| 2.1771 | 5.0 | 115 | 2.5613 | nan | nan | nan |
| 2.1903 | 6.0 | 138 | 2.3448 | nan | nan | nan |
| 2.2164 | 7.0 | 161 | 3.0915 | nan | nan | nan |
| 2.2509 | 8.0 | 184 | 2.3759 | nan | nan | nan |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
SEBIS/code_trans_t5_base_code_documentation_generation_go
|
SEBIS
|
summarization
|
[
"transformers",
"pytorch",
"jax",
"t5",
"feature-extraction",
"summarization",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2021-06-23T04:12:04 | 128 | 0 |
---
tags:
- summarization
widget:
- text: func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot
&& pr . Match >= pr . PendingSnapshot }
---
# CodeTrans model for code documentation generation go
Pretrained model on programming language go using the t5 base model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized go code functions: it works best with tokenized go functions.
## Model description
This CodeTrans model is based on the `t5-base` model. It has its own SentencePiece vocabulary model. It used single-task training on the CodeSearchNet Corpus go dataset.
## Intended uses & limitations
The model could be used to generate a description for a go function or be fine-tuned on other go code tasks. It can be used on unparsed and untokenized go code. However, if the go code is tokenized, the performance should be better.
### How to use
Here is how to use this model to generate go function documentation using Transformers SummarizationPipeline:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_go"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_base_code_documentation_generation_go", skip_special_tokens=True),
device=0
)
tokenized_code = "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"
pipeline([tokenized_code])
```
Run this example in [colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/single%20task/function%20documentation%20generation/go/base_model.ipynb).
## Training data
The supervised training tasks datasets can be downloaded on [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1)
## Evaluation results
For the code documentation tasks, the different models achieve the following results on different programming languages (in BLEU score):
Test results:
| Language / Model | Python | Java | Go | Php | Ruby | JavaScript |
| -------------------- | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| CodeTrans-ST-Small | 17.31 | 16.65 | 16.89 | 23.05 | 9.19 | 13.7 |
| CodeTrans-ST-Base | 16.86 | 17.17 | 17.16 | 22.98 | 8.23 | 13.17 |
| CodeTrans-TF-Small | 19.93 | 19.48 | 18.88 | 25.35 | 13.15 | 17.23 |
| CodeTrans-TF-Base | 20.26 | 20.19 | 19.50 | 25.84 | 14.07 | 18.25 |
| CodeTrans-TF-Large | 20.35 | 20.06 | **19.54** | 26.18 | 14.94 | **18.98** |
| CodeTrans-MT-Small | 19.64 | 19.00 | 19.15 | 24.68 | 14.91 | 15.26 |
| CodeTrans-MT-Base | **20.39** | 21.22 | 19.43 | **26.23** | **15.26** | 16.11 |
| CodeTrans-MT-Large | 20.18 | **21.87** | 19.38 | 26.08 | 15.00 | 16.23 |
| CodeTrans-MT-TF-Small | 19.77 | 20.04 | 19.36 | 25.55 | 13.70 | 17.24 |
| CodeTrans-MT-TF-Base | 19.77 | 21.12 | 18.86 | 25.79 | 14.24 | 18.62 |
| CodeTrans-MT-TF-Large | 18.94 | 21.42 | 18.77 | 26.20 | 14.19 | 18.83 |
| State of the art | 19.06 | 17.65 | 18.07 | 25.16 | 12.16 | 14.90 |
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
TransferGraph/YeRyeongLee_electra-base-discriminator-finetuned-filtered-0602-finetuned-lora-tweet_eval_irony
|
TransferGraph
|
text-classification
|
[
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602",
"base_model:adapter:YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602",
"license:apache-2.0",
"model-index",
"region:us"
] | 1,709,055,185,000 | 2024-02-29T13:38:35 | 0 | 0 |
---
base_model: YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602
datasets:
- tweet_eval
library_name: peft
license: apache-2.0
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: YeRyeongLee_electra-base-discriminator-finetuned-filtered-0602-finetuned-lora-tweet_eval_irony
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: irony
split: validation
args: irony
metrics:
- type: accuracy
value: 0.47643979057591623
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# YeRyeongLee_electra-base-discriminator-finetuned-filtered-0602-finetuned-lora-tweet_eval_irony
This model is a fine-tuned version of [YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602](https://huggingface.co/YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.4764
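A minimal loading sketch, assuming the adapter follows the standard PEFT layout and stores its own two-label classification head on top of the base checkpoint; the example tweet and label order are illustrative.
```python
# Minimal sketch: load the LoRA adapter on top of its base model for inference.
# Assumes the adapter saves its own 2-label classifier head (irony vs. non-irony).
import torch
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForSequenceClassification

base_id = "YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602"
adapter_id = (
    "TransferGraph/YeRyeongLee_electra-base-discriminator-finetuned-filtered-0602"
    "-finetuned-lora-tweet_eval_irony"
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, ignore_mismatched_sizes=True  # base head may have a different size
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

inputs = tokenizer("Great, another Monday. Just what I needed.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # probabilities for [non_irony, irony] (label order assumed)
```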
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.5246 | None | 0 |
| 0.5257 | 0.7225 | 0 |
| 0.4743 | 0.7059 | 1 |
| 0.4743 | 0.6978 | 2 |
| 0.4775 | 0.6971 | 3 |
| 0.4764 | 0.6953 | 4 |
| 0.4764 | 0.6959 | 5 |
| 0.4764 | 0.6963 | 6 |
| 0.4764 | 0.6956 | 7 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
Alibaba-NLP/gte-Qwen2-7B-instruct
|
Alibaba-NLP
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"qwen2",
"text-generation",
"mteb",
"transformers",
"Qwen2",
"sentence-similarity",
"custom_code",
"arxiv:2308.03281",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,718,450,661,000 | 2025-01-11T08:10:51 | 110,385 | 348 |
---
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
model-index:
- name: gte-qwen2-7B-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 91.31343283582089
- type: ap
value: 67.64251402604096
- type: f1
value: 87.53372530755692
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 97.497825
- type: ap
value: 96.30329547047529
- type: f1
value: 97.49769793778039
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 62.564
- type: f1
value: 60.975777935041066
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 36.486000000000004
- type: map_at_10
value: 54.842
- type: map_at_100
value: 55.206999999999994
- type: map_at_1000
value: 55.206999999999994
- type: map_at_3
value: 49.893
- type: map_at_5
value: 53.105000000000004
- type: mrr_at_1
value: 37.34
- type: mrr_at_10
value: 55.143
- type: mrr_at_100
value: 55.509
- type: mrr_at_1000
value: 55.509
- type: mrr_at_3
value: 50.212999999999994
- type: mrr_at_5
value: 53.432
- type: ndcg_at_1
value: 36.486000000000004
- type: ndcg_at_10
value: 64.273
- type: ndcg_at_100
value: 65.66199999999999
- type: ndcg_at_1000
value: 65.66199999999999
- type: ndcg_at_3
value: 54.352999999999994
- type: ndcg_at_5
value: 60.131
- type: precision_at_1
value: 36.486000000000004
- type: precision_at_10
value: 9.395000000000001
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.428
- type: precision_at_5
value: 16.259
- type: recall_at_1
value: 36.486000000000004
- type: recall_at_10
value: 93.95400000000001
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 67.283
- type: recall_at_5
value: 81.294
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 56.461169803700564
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 51.73600434466286
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.57827065898053
- type: mrr
value: 79.08136569493911
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 83.53324575999243
- type: cos_sim_spearman
value: 81.37173362822374
- type: euclidean_pearson
value: 82.19243335103444
- type: euclidean_spearman
value: 81.33679307304334
- type: manhattan_pearson
value: 82.38752665975699
- type: manhattan_spearman
value: 81.31510583189689
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.56818181818181
- type: f1
value: 87.25826722019875
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 50.09239610327673
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 46.64733054606282
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 33.997
- type: map_at_10
value: 48.176
- type: map_at_100
value: 49.82
- type: map_at_1000
value: 49.924
- type: map_at_3
value: 43.626
- type: map_at_5
value: 46.275
- type: mrr_at_1
value: 42.059999999999995
- type: mrr_at_10
value: 53.726
- type: mrr_at_100
value: 54.398
- type: mrr_at_1000
value: 54.416
- type: mrr_at_3
value: 50.714999999999996
- type: mrr_at_5
value: 52.639
- type: ndcg_at_1
value: 42.059999999999995
- type: ndcg_at_10
value: 55.574999999999996
- type: ndcg_at_100
value: 60.744
- type: ndcg_at_1000
value: 61.85699999999999
- type: ndcg_at_3
value: 49.363
- type: ndcg_at_5
value: 52.44
- type: precision_at_1
value: 42.059999999999995
- type: precision_at_10
value: 11.101999999999999
- type: precision_at_100
value: 1.73
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 24.464
- type: precision_at_5
value: 18.026
- type: recall_at_1
value: 33.997
- type: recall_at_10
value: 70.35900000000001
- type: recall_at_100
value: 91.642
- type: recall_at_1000
value: 97.977
- type: recall_at_3
value: 52.76
- type: recall_at_5
value: 61.148
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 35.884
- type: map_at_10
value: 48.14
- type: map_at_100
value: 49.5
- type: map_at_1000
value: 49.63
- type: map_at_3
value: 44.646
- type: map_at_5
value: 46.617999999999995
- type: mrr_at_1
value: 44.458999999999996
- type: mrr_at_10
value: 53.751000000000005
- type: mrr_at_100
value: 54.37800000000001
- type: mrr_at_1000
value: 54.415
- type: mrr_at_3
value: 51.815
- type: mrr_at_5
value: 52.882
- type: ndcg_at_1
value: 44.458999999999996
- type: ndcg_at_10
value: 54.157
- type: ndcg_at_100
value: 58.362
- type: ndcg_at_1000
value: 60.178
- type: ndcg_at_3
value: 49.661
- type: ndcg_at_5
value: 51.74999999999999
- type: precision_at_1
value: 44.458999999999996
- type: precision_at_10
value: 10.248
- type: precision_at_100
value: 1.5890000000000002
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 23.928
- type: precision_at_5
value: 16.878999999999998
- type: recall_at_1
value: 35.884
- type: recall_at_10
value: 64.798
- type: recall_at_100
value: 82.345
- type: recall_at_1000
value: 93.267
- type: recall_at_3
value: 51.847
- type: recall_at_5
value: 57.601
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 39.383
- type: map_at_10
value: 53.714
- type: map_at_100
value: 54.838
- type: map_at_1000
value: 54.87800000000001
- type: map_at_3
value: 50.114999999999995
- type: map_at_5
value: 52.153000000000006
- type: mrr_at_1
value: 45.016
- type: mrr_at_10
value: 56.732000000000006
- type: mrr_at_100
value: 57.411
- type: mrr_at_1000
value: 57.431
- type: mrr_at_3
value: 54.044000000000004
- type: mrr_at_5
value: 55.639
- type: ndcg_at_1
value: 45.016
- type: ndcg_at_10
value: 60.228
- type: ndcg_at_100
value: 64.277
- type: ndcg_at_1000
value: 65.07
- type: ndcg_at_3
value: 54.124
- type: ndcg_at_5
value: 57.147000000000006
- type: precision_at_1
value: 45.016
- type: precision_at_10
value: 9.937
- type: precision_at_100
value: 1.288
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.471999999999998
- type: precision_at_5
value: 16.991
- type: recall_at_1
value: 39.383
- type: recall_at_10
value: 76.175
- type: recall_at_100
value: 93.02
- type: recall_at_1000
value: 98.60900000000001
- type: recall_at_3
value: 60.265
- type: recall_at_5
value: 67.46600000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 27.426000000000002
- type: map_at_10
value: 37.397000000000006
- type: map_at_100
value: 38.61
- type: map_at_1000
value: 38.678000000000004
- type: map_at_3
value: 34.150999999999996
- type: map_at_5
value: 36.137
- type: mrr_at_1
value: 29.944
- type: mrr_at_10
value: 39.654
- type: mrr_at_100
value: 40.638000000000005
- type: mrr_at_1000
value: 40.691
- type: mrr_at_3
value: 36.817
- type: mrr_at_5
value: 38.524
- type: ndcg_at_1
value: 29.944
- type: ndcg_at_10
value: 43.094
- type: ndcg_at_100
value: 48.789
- type: ndcg_at_1000
value: 50.339999999999996
- type: ndcg_at_3
value: 36.984
- type: ndcg_at_5
value: 40.248
- type: precision_at_1
value: 29.944
- type: precision_at_10
value: 6.78
- type: precision_at_100
value: 1.024
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 15.895000000000001
- type: precision_at_5
value: 11.39
- type: recall_at_1
value: 27.426000000000002
- type: recall_at_10
value: 58.464000000000006
- type: recall_at_100
value: 84.193
- type: recall_at_1000
value: 95.52000000000001
- type: recall_at_3
value: 42.172
- type: recall_at_5
value: 50.101
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 19.721
- type: map_at_10
value: 31.604
- type: map_at_100
value: 32.972
- type: map_at_1000
value: 33.077
- type: map_at_3
value: 27.218999999999998
- type: map_at_5
value: 29.53
- type: mrr_at_1
value: 25.0
- type: mrr_at_10
value: 35.843
- type: mrr_at_100
value: 36.785000000000004
- type: mrr_at_1000
value: 36.842000000000006
- type: mrr_at_3
value: 32.193
- type: mrr_at_5
value: 34.264
- type: ndcg_at_1
value: 25.0
- type: ndcg_at_10
value: 38.606
- type: ndcg_at_100
value: 44.272
- type: ndcg_at_1000
value: 46.527
- type: ndcg_at_3
value: 30.985000000000003
- type: ndcg_at_5
value: 34.43
- type: precision_at_1
value: 25.0
- type: precision_at_10
value: 7.811
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.15
- type: precision_at_3
value: 15.423
- type: precision_at_5
value: 11.791
- type: recall_at_1
value: 19.721
- type: recall_at_10
value: 55.625
- type: recall_at_100
value: 79.34400000000001
- type: recall_at_1000
value: 95.208
- type: recall_at_3
value: 35.19
- type: recall_at_5
value: 43.626
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 33.784
- type: map_at_10
value: 47.522
- type: map_at_100
value: 48.949999999999996
- type: map_at_1000
value: 49.038
- type: map_at_3
value: 43.284
- type: map_at_5
value: 45.629
- type: mrr_at_1
value: 41.482
- type: mrr_at_10
value: 52.830999999999996
- type: mrr_at_100
value: 53.559999999999995
- type: mrr_at_1000
value: 53.588
- type: mrr_at_3
value: 50.016000000000005
- type: mrr_at_5
value: 51.614000000000004
- type: ndcg_at_1
value: 41.482
- type: ndcg_at_10
value: 54.569
- type: ndcg_at_100
value: 59.675999999999995
- type: ndcg_at_1000
value: 60.989000000000004
- type: ndcg_at_3
value: 48.187000000000005
- type: ndcg_at_5
value: 51.183
- type: precision_at_1
value: 41.482
- type: precision_at_10
value: 10.221
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.17500000000000002
- type: precision_at_3
value: 23.548
- type: precision_at_5
value: 16.805
- type: recall_at_1
value: 33.784
- type: recall_at_10
value: 69.798
- type: recall_at_100
value: 90.098
- type: recall_at_1000
value: 98.176
- type: recall_at_3
value: 52.127
- type: recall_at_5
value: 59.861
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.038999999999998
- type: map_at_10
value: 41.904
- type: map_at_100
value: 43.36
- type: map_at_1000
value: 43.453
- type: map_at_3
value: 37.785999999999994
- type: map_at_5
value: 40.105000000000004
- type: mrr_at_1
value: 35.046
- type: mrr_at_10
value: 46.926
- type: mrr_at_100
value: 47.815000000000005
- type: mrr_at_1000
value: 47.849000000000004
- type: mrr_at_3
value: 44.273
- type: mrr_at_5
value: 45.774
- type: ndcg_at_1
value: 35.046
- type: ndcg_at_10
value: 48.937000000000005
- type: ndcg_at_100
value: 54.544000000000004
- type: ndcg_at_1000
value: 56.069
- type: ndcg_at_3
value: 42.858000000000004
- type: ndcg_at_5
value: 45.644
- type: precision_at_1
value: 35.046
- type: precision_at_10
value: 9.452
- type: precision_at_100
value: 1.429
- type: precision_at_1000
value: 0.173
- type: precision_at_3
value: 21.346999999999998
- type: precision_at_5
value: 15.342
- type: recall_at_1
value: 28.038999999999998
- type: recall_at_10
value: 64.59700000000001
- type: recall_at_100
value: 87.735
- type: recall_at_1000
value: 97.41300000000001
- type: recall_at_3
value: 47.368
- type: recall_at_5
value: 54.93900000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 28.17291666666667
- type: map_at_10
value: 40.025749999999995
- type: map_at_100
value: 41.39208333333333
- type: map_at_1000
value: 41.499249999999996
- type: map_at_3
value: 36.347
- type: map_at_5
value: 38.41391666666667
- type: mrr_at_1
value: 33.65925
- type: mrr_at_10
value: 44.085499999999996
- type: mrr_at_100
value: 44.94116666666667
- type: mrr_at_1000
value: 44.9855
- type: mrr_at_3
value: 41.2815
- type: mrr_at_5
value: 42.91491666666666
- type: ndcg_at_1
value: 33.65925
- type: ndcg_at_10
value: 46.430833333333325
- type: ndcg_at_100
value: 51.761
- type: ndcg_at_1000
value: 53.50899999999999
- type: ndcg_at_3
value: 40.45133333333333
- type: ndcg_at_5
value: 43.31483333333334
- type: precision_at_1
value: 33.65925
- type: precision_at_10
value: 8.4995
- type: precision_at_100
value: 1.3210000000000004
- type: precision_at_1000
value: 0.16591666666666666
- type: precision_at_3
value: 19.165083333333335
- type: precision_at_5
value: 13.81816666666667
- type: recall_at_1
value: 28.17291666666667
- type: recall_at_10
value: 61.12624999999999
- type: recall_at_100
value: 83.97266666666667
- type: recall_at_1000
value: 95.66550000000001
- type: recall_at_3
value: 44.661249999999995
- type: recall_at_5
value: 51.983333333333334
- type: map_at_1
value: 17.936
- type: map_at_10
value: 27.399
- type: map_at_100
value: 28.632
- type: map_at_1000
value: 28.738000000000003
- type: map_at_3
value: 24.456
- type: map_at_5
value: 26.06
- type: mrr_at_1
value: 19.224
- type: mrr_at_10
value: 28.998
- type: mrr_at_100
value: 30.11
- type: mrr_at_1000
value: 30.177
- type: mrr_at_3
value: 26.247999999999998
- type: mrr_at_5
value: 27.708
- type: ndcg_at_1
value: 19.224
- type: ndcg_at_10
value: 32.911
- type: ndcg_at_100
value: 38.873999999999995
- type: ndcg_at_1000
value: 41.277
- type: ndcg_at_3
value: 27.142
- type: ndcg_at_5
value: 29.755
- type: precision_at_1
value: 19.224
- type: precision_at_10
value: 5.6930000000000005
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 12.138
- type: precision_at_5
value: 8.909
- type: recall_at_1
value: 17.936
- type: recall_at_10
value: 48.096
- type: recall_at_100
value: 75.389
- type: recall_at_1000
value: 92.803
- type: recall_at_3
value: 32.812999999999995
- type: recall_at_5
value: 38.851
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 24.681
- type: map_at_10
value: 34.892
- type: map_at_100
value: 35.996
- type: map_at_1000
value: 36.083
- type: map_at_3
value: 31.491999999999997
- type: map_at_5
value: 33.632
- type: mrr_at_1
value: 28.528
- type: mrr_at_10
value: 37.694
- type: mrr_at_100
value: 38.613
- type: mrr_at_1000
value: 38.668
- type: mrr_at_3
value: 34.714
- type: mrr_at_5
value: 36.616
- type: ndcg_at_1
value: 28.528
- type: ndcg_at_10
value: 40.703
- type: ndcg_at_100
value: 45.993
- type: ndcg_at_1000
value: 47.847
- type: ndcg_at_3
value: 34.622
- type: ndcg_at_5
value: 38.035999999999994
- type: precision_at_1
value: 28.528
- type: precision_at_10
value: 6.902
- type: precision_at_100
value: 1.0370000000000001
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 15.798000000000002
- type: precision_at_5
value: 11.655999999999999
- type: recall_at_1
value: 24.681
- type: recall_at_10
value: 55.81
- type: recall_at_100
value: 79.785
- type: recall_at_1000
value: 92.959
- type: recall_at_3
value: 39.074
- type: recall_at_5
value: 47.568
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 18.627
- type: map_at_10
value: 27.872000000000003
- type: map_at_100
value: 29.237999999999996
- type: map_at_1000
value: 29.363
- type: map_at_3
value: 24.751
- type: map_at_5
value: 26.521
- type: mrr_at_1
value: 23.021
- type: mrr_at_10
value: 31.924000000000003
- type: mrr_at_100
value: 32.922000000000004
- type: mrr_at_1000
value: 32.988
- type: mrr_at_3
value: 29.192
- type: mrr_at_5
value: 30.798
- type: ndcg_at_1
value: 23.021
- type: ndcg_at_10
value: 33.535
- type: ndcg_at_100
value: 39.732
- type: ndcg_at_1000
value: 42.201
- type: ndcg_at_3
value: 28.153
- type: ndcg_at_5
value: 30.746000000000002
- type: precision_at_1
value: 23.021
- type: precision_at_10
value: 6.459
- type: precision_at_100
value: 1.1320000000000001
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 13.719000000000001
- type: precision_at_5
value: 10.193000000000001
- type: recall_at_1
value: 18.627
- type: recall_at_10
value: 46.463
- type: recall_at_100
value: 74.226
- type: recall_at_1000
value: 91.28500000000001
- type: recall_at_3
value: 31.357000000000003
- type: recall_at_5
value: 38.067
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 31.457
- type: map_at_10
value: 42.888
- type: map_at_100
value: 44.24
- type: map_at_1000
value: 44.327
- type: map_at_3
value: 39.588
- type: map_at_5
value: 41.423
- type: mrr_at_1
value: 37.126999999999995
- type: mrr_at_10
value: 47.083000000000006
- type: mrr_at_100
value: 47.997
- type: mrr_at_1000
value: 48.044
- type: mrr_at_3
value: 44.574000000000005
- type: mrr_at_5
value: 46.202
- type: ndcg_at_1
value: 37.126999999999995
- type: ndcg_at_10
value: 48.833
- type: ndcg_at_100
value: 54.327000000000005
- type: ndcg_at_1000
value: 56.011
- type: ndcg_at_3
value: 43.541999999999994
- type: ndcg_at_5
value: 46.127
- type: precision_at_1
value: 37.126999999999995
- type: precision_at_10
value: 8.376999999999999
- type: precision_at_100
value: 1.2309999999999999
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 20.211000000000002
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 31.457
- type: recall_at_10
value: 62.369
- type: recall_at_100
value: 85.444
- type: recall_at_1000
value: 96.65599999999999
- type: recall_at_3
value: 47.961
- type: recall_at_5
value: 54.676
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.139999999999997
- type: map_at_10
value: 38.801
- type: map_at_100
value: 40.549
- type: map_at_1000
value: 40.802
- type: map_at_3
value: 35.05
- type: map_at_5
value: 36.884
- type: mrr_at_1
value: 33.004
- type: mrr_at_10
value: 43.864
- type: mrr_at_100
value: 44.667
- type: mrr_at_1000
value: 44.717
- type: mrr_at_3
value: 40.777
- type: mrr_at_5
value: 42.319
- type: ndcg_at_1
value: 33.004
- type: ndcg_at_10
value: 46.022
- type: ndcg_at_100
value: 51.542
- type: ndcg_at_1000
value: 53.742000000000004
- type: ndcg_at_3
value: 39.795
- type: ndcg_at_5
value: 42.272
- type: precision_at_1
value: 33.004
- type: precision_at_10
value: 9.012
- type: precision_at_100
value: 1.7770000000000001
- type: precision_at_1000
value: 0.26
- type: precision_at_3
value: 19.038
- type: precision_at_5
value: 13.675999999999998
- type: recall_at_1
value: 27.139999999999997
- type: recall_at_10
value: 60.961
- type: recall_at_100
value: 84.451
- type: recall_at_1000
value: 98.113
- type: recall_at_3
value: 43.001
- type: recall_at_5
value: 49.896
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 22.076999999999998
- type: map_at_10
value: 35.44
- type: map_at_100
value: 37.651
- type: map_at_1000
value: 37.824999999999996
- type: map_at_3
value: 30.764999999999997
- type: map_at_5
value: 33.26
- type: mrr_at_1
value: 50.163000000000004
- type: mrr_at_10
value: 61.207
- type: mrr_at_100
value: 61.675000000000004
- type: mrr_at_1000
value: 61.692
- type: mrr_at_3
value: 58.60999999999999
- type: mrr_at_5
value: 60.307
- type: ndcg_at_1
value: 50.163000000000004
- type: ndcg_at_10
value: 45.882
- type: ndcg_at_100
value: 53.239999999999995
- type: ndcg_at_1000
value: 55.852000000000004
- type: ndcg_at_3
value: 40.514
- type: ndcg_at_5
value: 42.038
- type: precision_at_1
value: 50.163000000000004
- type: precision_at_10
value: 13.466000000000001
- type: precision_at_100
value: 2.164
- type: precision_at_1000
value: 0.266
- type: precision_at_3
value: 29.707
- type: precision_at_5
value: 21.694
- type: recall_at_1
value: 22.076999999999998
- type: recall_at_10
value: 50.193
- type: recall_at_100
value: 74.993
- type: recall_at_1000
value: 89.131
- type: recall_at_3
value: 35.472
- type: recall_at_5
value: 41.814
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.953
- type: map_at_10
value: 24.515
- type: map_at_100
value: 36.173
- type: map_at_1000
value: 38.351
- type: map_at_3
value: 16.592000000000002
- type: map_at_5
value: 20.036
- type: mrr_at_1
value: 74.25
- type: mrr_at_10
value: 81.813
- type: mrr_at_100
value: 82.006
- type: mrr_at_1000
value: 82.011
- type: mrr_at_3
value: 80.875
- type: mrr_at_5
value: 81.362
- type: ndcg_at_1
value: 62.5
- type: ndcg_at_10
value: 52.42
- type: ndcg_at_100
value: 56.808
- type: ndcg_at_1000
value: 63.532999999999994
- type: ndcg_at_3
value: 56.654
- type: ndcg_at_5
value: 54.18300000000001
- type: precision_at_1
value: 74.25
- type: precision_at_10
value: 42.699999999999996
- type: precision_at_100
value: 13.675
- type: precision_at_1000
value: 2.664
- type: precision_at_3
value: 60.5
- type: precision_at_5
value: 52.800000000000004
- type: recall_at_1
value: 9.953
- type: recall_at_10
value: 30.253999999999998
- type: recall_at_100
value: 62.516000000000005
- type: recall_at_1000
value: 84.163
- type: recall_at_3
value: 18.13
- type: recall_at_5
value: 22.771
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 79.455
- type: f1
value: 74.16798697647569
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 87.531
- type: map_at_10
value: 93.16799999999999
- type: map_at_100
value: 93.341
- type: map_at_1000
value: 93.349
- type: map_at_3
value: 92.444
- type: map_at_5
value: 92.865
- type: mrr_at_1
value: 94.014
- type: mrr_at_10
value: 96.761
- type: mrr_at_100
value: 96.762
- type: mrr_at_1000
value: 96.762
- type: mrr_at_3
value: 96.672
- type: mrr_at_5
value: 96.736
- type: ndcg_at_1
value: 94.014
- type: ndcg_at_10
value: 95.112
- type: ndcg_at_100
value: 95.578
- type: ndcg_at_1000
value: 95.68900000000001
- type: ndcg_at_3
value: 94.392
- type: ndcg_at_5
value: 94.72500000000001
- type: precision_at_1
value: 94.014
- type: precision_at_10
value: 11.065
- type: precision_at_100
value: 1.157
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 35.259
- type: precision_at_5
value: 21.599
- type: recall_at_1
value: 87.531
- type: recall_at_10
value: 97.356
- type: recall_at_100
value: 98.965
- type: recall_at_1000
value: 99.607
- type: recall_at_3
value: 95.312
- type: recall_at_5
value: 96.295
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 32.055
- type: map_at_10
value: 53.114
- type: map_at_100
value: 55.235
- type: map_at_1000
value: 55.345
- type: map_at_3
value: 45.854
- type: map_at_5
value: 50.025
- type: mrr_at_1
value: 60.34
- type: mrr_at_10
value: 68.804
- type: mrr_at_100
value: 69.309
- type: mrr_at_1000
value: 69.32199999999999
- type: mrr_at_3
value: 66.40899999999999
- type: mrr_at_5
value: 67.976
- type: ndcg_at_1
value: 60.34
- type: ndcg_at_10
value: 62.031000000000006
- type: ndcg_at_100
value: 68.00500000000001
- type: ndcg_at_1000
value: 69.286
- type: ndcg_at_3
value: 56.355999999999995
- type: ndcg_at_5
value: 58.687
- type: precision_at_1
value: 60.34
- type: precision_at_10
value: 17.176
- type: precision_at_100
value: 2.36
- type: precision_at_1000
value: 0.259
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 27.809
- type: recall_at_1
value: 32.055
- type: recall_at_10
value: 70.91
- type: recall_at_100
value: 91.83
- type: recall_at_1000
value: 98.871
- type: recall_at_3
value: 51.202999999999996
- type: recall_at_5
value: 60.563
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 43.68
- type: map_at_10
value: 64.389
- type: map_at_100
value: 65.24
- type: map_at_1000
value: 65.303
- type: map_at_3
value: 61.309000000000005
- type: map_at_5
value: 63.275999999999996
- type: mrr_at_1
value: 87.36
- type: mrr_at_10
value: 91.12
- type: mrr_at_100
value: 91.227
- type: mrr_at_1000
value: 91.229
- type: mrr_at_3
value: 90.57600000000001
- type: mrr_at_5
value: 90.912
- type: ndcg_at_1
value: 87.36
- type: ndcg_at_10
value: 73.076
- type: ndcg_at_100
value: 75.895
- type: ndcg_at_1000
value: 77.049
- type: ndcg_at_3
value: 68.929
- type: ndcg_at_5
value: 71.28
- type: precision_at_1
value: 87.36
- type: precision_at_10
value: 14.741000000000001
- type: precision_at_100
value: 1.694
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 43.043
- type: precision_at_5
value: 27.681
- type: recall_at_1
value: 43.68
- type: recall_at_10
value: 73.707
- type: recall_at_100
value: 84.7
- type: recall_at_1000
value: 92.309
- type: recall_at_3
value: 64.564
- type: recall_at_5
value: 69.203
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 96.75399999999999
- type: ap
value: 95.29389839242187
- type: f1
value: 96.75348377433475
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 25.176
- type: map_at_10
value: 38.598
- type: map_at_100
value: 39.707
- type: map_at_1000
value: 39.744
- type: map_at_3
value: 34.566
- type: map_at_5
value: 36.863
- type: mrr_at_1
value: 25.874000000000002
- type: mrr_at_10
value: 39.214
- type: mrr_at_100
value: 40.251
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 35.291
- type: mrr_at_5
value: 37.545
- type: ndcg_at_1
value: 25.874000000000002
- type: ndcg_at_10
value: 45.98
- type: ndcg_at_100
value: 51.197
- type: ndcg_at_1000
value: 52.073
- type: ndcg_at_3
value: 37.785999999999994
- type: ndcg_at_5
value: 41.870000000000005
- type: precision_at_1
value: 25.874000000000002
- type: precision_at_10
value: 7.181
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 16.051000000000002
- type: precision_at_5
value: 11.713
- type: recall_at_1
value: 25.176
- type: recall_at_10
value: 68.67699999999999
- type: recall_at_100
value: 92.55
- type: recall_at_1000
value: 99.164
- type: recall_at_3
value: 46.372
- type: recall_at_5
value: 56.16
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 99.03784769721841
- type: f1
value: 98.97791641821495
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 91.88326493388054
- type: f1
value: 73.74809928034335
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 85.41358439811701
- type: f1
value: 83.503679460639
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 89.77135171486215
- type: f1
value: 88.89843747468366
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 46.22695362087359
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 44.132372165849425
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.35680810650402
- type: mrr
value: 34.72625715637218
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 7.165000000000001
- type: map_at_10
value: 15.424
- type: map_at_100
value: 20.28
- type: map_at_1000
value: 22.065
- type: map_at_3
value: 11.236
- type: map_at_5
value: 13.025999999999998
- type: mrr_at_1
value: 51.702999999999996
- type: mrr_at_10
value: 59.965
- type: mrr_at_100
value: 60.667
- type: mrr_at_1000
value: 60.702999999999996
- type: mrr_at_3
value: 58.772000000000006
- type: mrr_at_5
value: 59.267
- type: ndcg_at_1
value: 49.536
- type: ndcg_at_10
value: 40.6
- type: ndcg_at_100
value: 37.848
- type: ndcg_at_1000
value: 46.657
- type: ndcg_at_3
value: 46.117999999999995
- type: ndcg_at_5
value: 43.619
- type: precision_at_1
value: 51.393
- type: precision_at_10
value: 30.31
- type: precision_at_100
value: 9.972
- type: precision_at_1000
value: 2.329
- type: precision_at_3
value: 43.137
- type: precision_at_5
value: 37.585
- type: recall_at_1
value: 7.165000000000001
- type: recall_at_10
value: 19.689999999999998
- type: recall_at_100
value: 39.237
- type: recall_at_1000
value: 71.417
- type: recall_at_3
value: 12.247
- type: recall_at_5
value: 14.902999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 42.653999999999996
- type: map_at_10
value: 59.611999999999995
- type: map_at_100
value: 60.32300000000001
- type: map_at_1000
value: 60.336
- type: map_at_3
value: 55.584999999999994
- type: map_at_5
value: 58.19
- type: mrr_at_1
value: 47.683
- type: mrr_at_10
value: 62.06700000000001
- type: mrr_at_100
value: 62.537
- type: mrr_at_1000
value: 62.544999999999995
- type: mrr_at_3
value: 59.178
- type: mrr_at_5
value: 61.034
- type: ndcg_at_1
value: 47.654
- type: ndcg_at_10
value: 67.001
- type: ndcg_at_100
value: 69.73899999999999
- type: ndcg_at_1000
value: 69.986
- type: ndcg_at_3
value: 59.95700000000001
- type: ndcg_at_5
value: 64.025
- type: precision_at_1
value: 47.654
- type: precision_at_10
value: 10.367999999999999
- type: precision_at_100
value: 1.192
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 26.651000000000003
- type: precision_at_5
value: 18.459
- type: recall_at_1
value: 42.653999999999996
- type: recall_at_10
value: 86.619
- type: recall_at_100
value: 98.04899999999999
- type: recall_at_1000
value: 99.812
- type: recall_at_3
value: 68.987
- type: recall_at_5
value: 78.158
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 72.538
- type: map_at_10
value: 86.702
- type: map_at_100
value: 87.31
- type: map_at_1000
value: 87.323
- type: map_at_3
value: 83.87
- type: map_at_5
value: 85.682
- type: mrr_at_1
value: 83.31
- type: mrr_at_10
value: 89.225
- type: mrr_at_100
value: 89.30399999999999
- type: mrr_at_1000
value: 89.30399999999999
- type: mrr_at_3
value: 88.44300000000001
- type: mrr_at_5
value: 89.005
- type: ndcg_at_1
value: 83.32000000000001
- type: ndcg_at_10
value: 90.095
- type: ndcg_at_100
value: 91.12
- type: ndcg_at_1000
value: 91.179
- type: ndcg_at_3
value: 87.606
- type: ndcg_at_5
value: 89.031
- type: precision_at_1
value: 83.32000000000001
- type: precision_at_10
value: 13.641
- type: precision_at_100
value: 1.541
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.377
- type: precision_at_5
value: 25.162000000000003
- type: recall_at_1
value: 72.538
- type: recall_at_10
value: 96.47200000000001
- type: recall_at_100
value: 99.785
- type: recall_at_1000
value: 99.99900000000001
- type: recall_at_3
value: 89.278
- type: recall_at_5
value: 93.367
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 73.55219145406065
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 74.13437105242755
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.873
- type: map_at_10
value: 17.944
- type: map_at_100
value: 21.171
- type: map_at_1000
value: 21.528
- type: map_at_3
value: 12.415
- type: map_at_5
value: 15.187999999999999
- type: mrr_at_1
value: 33.800000000000004
- type: mrr_at_10
value: 46.455
- type: mrr_at_100
value: 47.378
- type: mrr_at_1000
value: 47.394999999999996
- type: mrr_at_3
value: 42.367
- type: mrr_at_5
value: 44.972
- type: ndcg_at_1
value: 33.800000000000004
- type: ndcg_at_10
value: 28.907
- type: ndcg_at_100
value: 39.695
- type: ndcg_at_1000
value: 44.582
- type: ndcg_at_3
value: 26.949
- type: ndcg_at_5
value: 23.988
- type: precision_at_1
value: 33.800000000000004
- type: precision_at_10
value: 15.079999999999998
- type: precision_at_100
value: 3.056
- type: precision_at_1000
value: 0.42100000000000004
- type: precision_at_3
value: 25.167
- type: precision_at_5
value: 21.26
- type: recall_at_1
value: 6.873
- type: recall_at_10
value: 30.568
- type: recall_at_100
value: 62.062
- type: recall_at_1000
value: 85.37700000000001
- type: recall_at_3
value: 15.312999999999999
- type: recall_at_5
value: 21.575
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.37009118256057
- type: cos_sim_spearman
value: 79.27986395671529
- type: euclidean_pearson
value: 79.18037715442115
- type: euclidean_spearman
value: 79.28004791561621
- type: manhattan_pearson
value: 79.34062972800541
- type: manhattan_spearman
value: 79.43106695543402
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.48474767383833
- type: cos_sim_spearman
value: 79.54505388752513
- type: euclidean_pearson
value: 83.43282704179565
- type: euclidean_spearman
value: 79.54579919925405
- type: manhattan_pearson
value: 83.77564492427952
- type: manhattan_spearman
value: 79.84558396989286
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 88.803698035802
- type: cos_sim_spearman
value: 88.83451367754881
- type: euclidean_pearson
value: 88.28939285711628
- type: euclidean_spearman
value: 88.83528996073112
- type: manhattan_pearson
value: 88.28017412671795
- type: manhattan_spearman
value: 88.9228828016344
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.27469288153428
- type: cos_sim_spearman
value: 83.87477064876288
- type: euclidean_pearson
value: 84.2601737035379
- type: euclidean_spearman
value: 83.87431082479074
- type: manhattan_pearson
value: 84.3621547772745
- type: manhattan_spearman
value: 84.12094375000423
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.12749863201587
- type: cos_sim_spearman
value: 88.54287568368565
- type: euclidean_pearson
value: 87.90429700607999
- type: euclidean_spearman
value: 88.5437689576261
- type: manhattan_pearson
value: 88.19276653356833
- type: manhattan_spearman
value: 88.99995393814679
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.68398747560902
- type: cos_sim_spearman
value: 86.48815303460574
- type: euclidean_pearson
value: 85.52356631237954
- type: euclidean_spearman
value: 86.486391949551
- type: manhattan_pearson
value: 85.67267981761788
- type: manhattan_spearman
value: 86.7073696332485
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.9057107443124
- type: cos_sim_spearman
value: 88.7312168757697
- type: euclidean_pearson
value: 88.72810439714794
- type: euclidean_spearman
value: 88.71976185854771
- type: manhattan_pearson
value: 88.50433745949111
- type: manhattan_spearman
value: 88.51726175544195
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 67.59391795109886
- type: cos_sim_spearman
value: 66.87613008631367
- type: euclidean_pearson
value: 69.23198488262217
- type: euclidean_spearman
value: 66.85427723013692
- type: manhattan_pearson
value: 69.50730124841084
- type: manhattan_spearman
value: 67.10404669820792
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.0820605344619
- type: cos_sim_spearman
value: 86.8518089863434
- type: euclidean_pearson
value: 86.31087134689284
- type: euclidean_spearman
value: 86.8518520517941
- type: manhattan_pearson
value: 86.47203796160612
- type: manhattan_spearman
value: 87.1080149734421
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 89.09255369305481
- type: mrr
value: 97.10323445617563
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.260999999999996
- type: map_at_10
value: 74.043
- type: map_at_100
value: 74.37700000000001
- type: map_at_1000
value: 74.384
- type: map_at_3
value: 71.222
- type: map_at_5
value: 72.875
- type: mrr_at_1
value: 64.333
- type: mrr_at_10
value: 74.984
- type: mrr_at_100
value: 75.247
- type: mrr_at_1000
value: 75.25500000000001
- type: mrr_at_3
value: 73.167
- type: mrr_at_5
value: 74.35000000000001
- type: ndcg_at_1
value: 64.333
- type: ndcg_at_10
value: 79.06
- type: ndcg_at_100
value: 80.416
- type: ndcg_at_1000
value: 80.55600000000001
- type: ndcg_at_3
value: 74.753
- type: ndcg_at_5
value: 76.97500000000001
- type: precision_at_1
value: 64.333
- type: precision_at_10
value: 10.567
- type: precision_at_100
value: 1.1199999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 29.889
- type: precision_at_5
value: 19.533
- type: recall_at_1
value: 61.260999999999996
- type: recall_at_10
value: 93.167
- type: recall_at_100
value: 99.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 81.667
- type: recall_at_5
value: 87.394
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.71980198019801
- type: cos_sim_ap
value: 92.81616007802704
- type: cos_sim_f1
value: 85.17548454688318
- type: cos_sim_precision
value: 89.43894389438944
- type: cos_sim_recall
value: 81.3
- type: dot_accuracy
value: 99.71980198019801
- type: dot_ap
value: 92.81398760591358
- type: dot_f1
value: 85.17548454688318
- type: dot_precision
value: 89.43894389438944
- type: dot_recall
value: 81.3
- type: euclidean_accuracy
value: 99.71980198019801
- type: euclidean_ap
value: 92.81560637245072
- type: euclidean_f1
value: 85.17548454688318
- type: euclidean_precision
value: 89.43894389438944
- type: euclidean_recall
value: 81.3
- type: manhattan_accuracy
value: 99.73069306930694
- type: manhattan_ap
value: 93.14005487480794
- type: manhattan_f1
value: 85.56263269639068
- type: manhattan_precision
value: 91.17647058823529
- type: manhattan_recall
value: 80.60000000000001
- type: max_accuracy
value: 99.73069306930694
- type: max_ap
value: 93.14005487480794
- type: max_f1
value: 85.56263269639068
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 79.86443362395185
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 49.40897096662564
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.66040806627947
- type: mrr
value: 56.58670475766064
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.51015090598575
- type: cos_sim_spearman
value: 31.35016454939226
- type: dot_pearson
value: 31.5150068731
- type: dot_spearman
value: 31.34790869023487
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.254
- type: map_at_10
value: 2.064
- type: map_at_100
value: 12.909
- type: map_at_1000
value: 31.761
- type: map_at_3
value: 0.738
- type: map_at_5
value: 1.155
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 98.0
- type: mrr_at_100
value: 98.0
- type: mrr_at_1000
value: 98.0
- type: mrr_at_3
value: 98.0
- type: mrr_at_5
value: 98.0
- type: ndcg_at_1
value: 93.0
- type: ndcg_at_10
value: 82.258
- type: ndcg_at_100
value: 64.34
- type: ndcg_at_1000
value: 57.912
- type: ndcg_at_3
value: 90.827
- type: ndcg_at_5
value: 86.79
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 84.8
- type: precision_at_100
value: 66.0
- type: precision_at_1000
value: 25.356
- type: precision_at_3
value: 94.667
- type: precision_at_5
value: 90.4
- type: recall_at_1
value: 0.254
- type: recall_at_10
value: 2.1950000000000003
- type: recall_at_100
value: 16.088
- type: recall_at_1000
value: 54.559000000000005
- type: recall_at_3
value: 0.75
- type: recall_at_5
value: 1.191
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.976
- type: map_at_10
value: 11.389000000000001
- type: map_at_100
value: 18.429000000000002
- type: map_at_1000
value: 20.113
- type: map_at_3
value: 6.483
- type: map_at_5
value: 8.770999999999999
- type: mrr_at_1
value: 40.816
- type: mrr_at_10
value: 58.118
- type: mrr_at_100
value: 58.489999999999995
- type: mrr_at_1000
value: 58.489999999999995
- type: mrr_at_3
value: 53.061
- type: mrr_at_5
value: 57.041
- type: ndcg_at_1
value: 40.816
- type: ndcg_at_10
value: 30.567
- type: ndcg_at_100
value: 42.44
- type: ndcg_at_1000
value: 53.480000000000004
- type: ndcg_at_3
value: 36.016
- type: ndcg_at_5
value: 34.257
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 25.714
- type: precision_at_100
value: 8.429
- type: precision_at_1000
value: 1.5939999999999999
- type: precision_at_3
value: 36.735
- type: precision_at_5
value: 33.878
- type: recall_at_1
value: 2.976
- type: recall_at_10
value: 17.854999999999997
- type: recall_at_100
value: 51.833
- type: recall_at_1000
value: 86.223
- type: recall_at_3
value: 7.887
- type: recall_at_5
value: 12.026
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 85.1174
- type: ap
value: 30.169441069345748
- type: f1
value: 69.79254701873245
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 72.58347481607245
- type: f1
value: 72.74877295564937
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 53.90586138221305
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.35769207844072
- type: cos_sim_ap
value: 77.9645072410354
- type: cos_sim_f1
value: 71.32352941176471
- type: cos_sim_precision
value: 66.5903890160183
- type: cos_sim_recall
value: 76.78100263852242
- type: dot_accuracy
value: 87.37557370209214
- type: dot_ap
value: 77.96250046429908
- type: dot_f1
value: 71.28932757557064
- type: dot_precision
value: 66.95249130938586
- type: dot_recall
value: 76.22691292875989
- type: euclidean_accuracy
value: 87.35173153722357
- type: euclidean_ap
value: 77.96520460741593
- type: euclidean_f1
value: 71.32470733210104
- type: euclidean_precision
value: 66.91329479768785
- type: euclidean_recall
value: 76.35883905013192
- type: manhattan_accuracy
value: 87.25636287774931
- type: manhattan_ap
value: 77.77752485611796
- type: manhattan_f1
value: 71.18148599269183
- type: manhattan_precision
value: 66.10859728506787
- type: manhattan_recall
value: 77.0976253298153
- type: max_accuracy
value: 87.37557370209214
- type: max_ap
value: 77.96520460741593
- type: max_f1
value: 71.32470733210104
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.38176737687739
- type: cos_sim_ap
value: 86.58811861657401
- type: cos_sim_f1
value: 79.09430644097604
- type: cos_sim_precision
value: 75.45085977911366
- type: cos_sim_recall
value: 83.10748383122882
- type: dot_accuracy
value: 89.38370784336554
- type: dot_ap
value: 86.58840606004333
- type: dot_f1
value: 79.10179860068133
- type: dot_precision
value: 75.44546153308643
- type: dot_recall
value: 83.13058207576223
- type: euclidean_accuracy
value: 89.38564830985369
- type: euclidean_ap
value: 86.58820721061164
- type: euclidean_f1
value: 79.09070942235888
- type: euclidean_precision
value: 75.38729937194697
- type: euclidean_recall
value: 83.17677856482906
- type: manhattan_accuracy
value: 89.40699344122326
- type: manhattan_ap
value: 86.60631843011362
- type: manhattan_f1
value: 79.14949970570925
- type: manhattan_precision
value: 75.78191039729502
- type: manhattan_recall
value: 82.83030489682784
- type: max_accuracy
value: 89.40699344122326
- type: max_ap
value: 86.60631843011362
- type: max_f1
value: 79.14949970570925
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 65.58442135663871
- type: cos_sim_spearman
value: 72.2538631361313
- type: euclidean_pearson
value: 70.97255486607429
- type: euclidean_spearman
value: 72.25374250228647
- type: manhattan_pearson
value: 70.83250199989911
- type: manhattan_spearman
value: 72.14819496536272
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 59.99478404929932
- type: cos_sim_spearman
value: 62.61836216999812
- type: euclidean_pearson
value: 66.86429811933593
- type: euclidean_spearman
value: 62.6183520374191
- type: manhattan_pearson
value: 66.8063778911633
- type: manhattan_spearman
value: 62.569607573241115
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.98400000000001
- type: f1
value: 51.21447361350723
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 79.11941660686553
- type: cos_sim_spearman
value: 81.25029594540435
- type: euclidean_pearson
value: 82.06973504238826
- type: euclidean_spearman
value: 81.2501989488524
- type: manhattan_pearson
value: 82.10094630392753
- type: manhattan_spearman
value: 81.27987244392389
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 47.07270168705156
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 45.98511703185043
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.19895157194931
- type: mrr
value: 90.21424603174603
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 88.03317320980119
- type: mrr
value: 89.9461507936508
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 29.037000000000003
- type: map_at_10
value: 42.001
- type: map_at_100
value: 43.773
- type: map_at_1000
value: 43.878
- type: map_at_3
value: 37.637
- type: map_at_5
value: 40.034
- type: mrr_at_1
value: 43.136
- type: mrr_at_10
value: 51.158
- type: mrr_at_100
value: 52.083
- type: mrr_at_1000
value: 52.12
- type: mrr_at_3
value: 48.733
- type: mrr_at_5
value: 50.025
- type: ndcg_at_1
value: 43.136
- type: ndcg_at_10
value: 48.685
- type: ndcg_at_100
value: 55.513
- type: ndcg_at_1000
value: 57.242000000000004
- type: ndcg_at_3
value: 43.329
- type: ndcg_at_5
value: 45.438
- type: precision_at_1
value: 43.136
- type: precision_at_10
value: 10.56
- type: precision_at_100
value: 1.6129999999999998
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 24.064
- type: precision_at_5
value: 17.269000000000002
- type: recall_at_1
value: 29.037000000000003
- type: recall_at_10
value: 59.245000000000005
- type: recall_at_100
value: 87.355
- type: recall_at_1000
value: 98.74000000000001
- type: recall_at_3
value: 42.99
- type: recall_at_5
value: 49.681999999999995
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 82.68190018039687
- type: cos_sim_ap
value: 90.18017125327886
- type: cos_sim_f1
value: 83.64080906868193
- type: cos_sim_precision
value: 79.7076890489303
- type: cos_sim_recall
value: 87.98223053542202
- type: dot_accuracy
value: 82.68190018039687
- type: dot_ap
value: 90.18782350103646
- type: dot_f1
value: 83.64242087729039
- type: dot_precision
value: 79.65313028764805
- type: dot_recall
value: 88.05237315875614
- type: euclidean_accuracy
value: 82.68190018039687
- type: euclidean_ap
value: 90.1801957900632
- type: euclidean_f1
value: 83.63636363636364
- type: euclidean_precision
value: 79.52772506852203
- type: euclidean_recall
value: 88.19265840542437
- type: manhattan_accuracy
value: 82.14070956103427
- type: manhattan_ap
value: 89.96178420101427
- type: manhattan_f1
value: 83.21087838578791
- type: manhattan_precision
value: 78.35605121850475
- type: manhattan_recall
value: 88.70703764320785
- type: max_accuracy
value: 82.68190018039687
- type: max_ap
value: 90.18782350103646
- type: max_f1
value: 83.64242087729039
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 72.234
- type: map_at_10
value: 80.10000000000001
- type: map_at_100
value: 80.36
- type: map_at_1000
value: 80.363
- type: map_at_3
value: 78.315
- type: map_at_5
value: 79.607
- type: mrr_at_1
value: 72.392
- type: mrr_at_10
value: 80.117
- type: mrr_at_100
value: 80.36999999999999
- type: mrr_at_1000
value: 80.373
- type: mrr_at_3
value: 78.469
- type: mrr_at_5
value: 79.633
- type: ndcg_at_1
value: 72.392
- type: ndcg_at_10
value: 83.651
- type: ndcg_at_100
value: 84.749
- type: ndcg_at_1000
value: 84.83000000000001
- type: ndcg_at_3
value: 80.253
- type: ndcg_at_5
value: 82.485
- type: precision_at_1
value: 72.392
- type: precision_at_10
value: 9.557
- type: precision_at_100
value: 1.004
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 28.732000000000003
- type: precision_at_5
value: 18.377
- type: recall_at_1
value: 72.234
- type: recall_at_10
value: 94.573
- type: recall_at_100
value: 99.368
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 85.669
- type: recall_at_5
value: 91.01700000000001
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 26.173999999999996
- type: map_at_10
value: 80.04
- type: map_at_100
value: 82.94500000000001
- type: map_at_1000
value: 82.98100000000001
- type: map_at_3
value: 55.562999999999995
- type: map_at_5
value: 69.89800000000001
- type: mrr_at_1
value: 89.5
- type: mrr_at_10
value: 92.996
- type: mrr_at_100
value: 93.06400000000001
- type: mrr_at_1000
value: 93.065
- type: mrr_at_3
value: 92.658
- type: mrr_at_5
value: 92.84599999999999
- type: ndcg_at_1
value: 89.5
- type: ndcg_at_10
value: 87.443
- type: ndcg_at_100
value: 90.253
- type: ndcg_at_1000
value: 90.549
- type: ndcg_at_3
value: 85.874
- type: ndcg_at_5
value: 84.842
- type: precision_at_1
value: 89.5
- type: precision_at_10
value: 41.805
- type: precision_at_100
value: 4.827
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 76.85
- type: precision_at_5
value: 64.8
- type: recall_at_1
value: 26.173999999999996
- type: recall_at_10
value: 89.101
- type: recall_at_100
value: 98.08099999999999
- type: recall_at_1000
value: 99.529
- type: recall_at_3
value: 57.902
- type: recall_at_5
value: 74.602
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 56.10000000000001
- type: map_at_10
value: 66.15299999999999
- type: map_at_100
value: 66.625
- type: map_at_1000
value: 66.636
- type: map_at_3
value: 63.632999999999996
- type: map_at_5
value: 65.293
- type: mrr_at_1
value: 56.10000000000001
- type: mrr_at_10
value: 66.15299999999999
- type: mrr_at_100
value: 66.625
- type: mrr_at_1000
value: 66.636
- type: mrr_at_3
value: 63.632999999999996
- type: mrr_at_5
value: 65.293
- type: ndcg_at_1
value: 56.10000000000001
- type: ndcg_at_10
value: 71.146
- type: ndcg_at_100
value: 73.27799999999999
- type: ndcg_at_1000
value: 73.529
- type: ndcg_at_3
value: 66.09
- type: ndcg_at_5
value: 69.08999999999999
- type: precision_at_1
value: 56.10000000000001
- type: precision_at_10
value: 8.68
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 24.4
- type: precision_at_5
value: 16.1
- type: recall_at_1
value: 56.10000000000001
- type: recall_at_10
value: 86.8
- type: recall_at_100
value: 96.39999999999999
- type: recall_at_1000
value: 98.3
- type: recall_at_3
value: 73.2
- type: recall_at_5
value: 80.5
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 54.52096960369373
- type: f1
value: 40.930845295808695
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 86.51031894934334
- type: ap
value: 55.9516014323483
- type: f1
value: 81.54813679326381
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 69.67437838574276
- type: cos_sim_spearman
value: 73.81314174653045
- type: euclidean_pearson
value: 72.63430276680275
- type: euclidean_spearman
value: 73.81358736777001
- type: manhattan_pearson
value: 72.58743833842829
- type: manhattan_spearman
value: 73.7590419009179
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 31.648613483640254
- type: mrr
value: 30.37420634920635
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 73.28099999999999
- type: map_at_10
value: 81.977
- type: map_at_100
value: 82.222
- type: map_at_1000
value: 82.22699999999999
- type: map_at_3
value: 80.441
- type: map_at_5
value: 81.46600000000001
- type: mrr_at_1
value: 75.673
- type: mrr_at_10
value: 82.41000000000001
- type: mrr_at_100
value: 82.616
- type: mrr_at_1000
value: 82.621
- type: mrr_at_3
value: 81.094
- type: mrr_at_5
value: 81.962
- type: ndcg_at_1
value: 75.673
- type: ndcg_at_10
value: 85.15599999999999
- type: ndcg_at_100
value: 86.151
- type: ndcg_at_1000
value: 86.26899999999999
- type: ndcg_at_3
value: 82.304
- type: ndcg_at_5
value: 84.009
- type: precision_at_1
value: 75.673
- type: precision_at_10
value: 10.042
- type: precision_at_100
value: 1.052
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 30.673000000000002
- type: precision_at_5
value: 19.326999999999998
- type: recall_at_1
value: 73.28099999999999
- type: recall_at_10
value: 94.446
- type: recall_at_100
value: 98.737
- type: recall_at_1000
value: 99.649
- type: recall_at_3
value: 86.984
- type: recall_at_5
value: 91.024
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.08607935440484
- type: f1
value: 78.24879986066307
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.05917955615332
- type: f1
value: 85.05279279434997
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 56.2
- type: map_at_10
value: 62.57899999999999
- type: map_at_100
value: 63.154999999999994
- type: map_at_1000
value: 63.193
- type: map_at_3
value: 61.217
- type: map_at_5
value: 62.012
- type: mrr_at_1
value: 56.3
- type: mrr_at_10
value: 62.629000000000005
- type: mrr_at_100
value: 63.205999999999996
- type: mrr_at_1000
value: 63.244
- type: mrr_at_3
value: 61.267
- type: mrr_at_5
value: 62.062
- type: ndcg_at_1
value: 56.2
- type: ndcg_at_10
value: 65.592
- type: ndcg_at_100
value: 68.657
- type: ndcg_at_1000
value: 69.671
- type: ndcg_at_3
value: 62.808
- type: ndcg_at_5
value: 64.24499999999999
- type: precision_at_1
value: 56.2
- type: precision_at_10
value: 7.5
- type: precision_at_100
value: 0.899
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 22.467000000000002
- type: precision_at_5
value: 14.180000000000001
- type: recall_at_1
value: 56.2
- type: recall_at_10
value: 75.0
- type: recall_at_100
value: 89.9
- type: recall_at_1000
value: 97.89999999999999
- type: recall_at_3
value: 67.4
- type: recall_at_5
value: 70.89999999999999
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 76.87666666666667
- type: f1
value: 76.7317686219665
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 79.64266377910124
- type: cos_sim_ap
value: 84.78274442344829
- type: cos_sim_f1
value: 81.16947472745292
- type: cos_sim_precision
value: 76.47058823529412
- type: cos_sim_recall
value: 86.48363252375924
- type: dot_accuracy
value: 79.64266377910124
- type: dot_ap
value: 84.7851404063692
- type: dot_f1
value: 81.16947472745292
- type: dot_precision
value: 76.47058823529412
- type: dot_recall
value: 86.48363252375924
- type: euclidean_accuracy
value: 79.64266377910124
- type: euclidean_ap
value: 84.78068373762378
- type: euclidean_f1
value: 81.14794656110837
- type: euclidean_precision
value: 76.35009310986965
- type: euclidean_recall
value: 86.58922914466737
- type: manhattan_accuracy
value: 79.48023822414727
- type: manhattan_ap
value: 84.72928897427576
- type: manhattan_f1
value: 81.32084770823064
- type: manhattan_precision
value: 76.24768946395564
- type: manhattan_recall
value: 87.11721224920802
- type: max_accuracy
value: 79.64266377910124
- type: max_ap
value: 84.7851404063692
- type: max_f1
value: 81.32084770823064
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 94.3
- type: ap
value: 92.8664032274438
- type: f1
value: 94.29311102997727
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 48.51392279882909
- type: cos_sim_spearman
value: 54.06338895994974
- type: euclidean_pearson
value: 52.58480559573412
- type: euclidean_spearman
value: 54.06417276612201
- type: manhattan_pearson
value: 52.69525121721343
- type: manhattan_spearman
value: 54.048147455389675
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 29.728387290757325
- type: cos_sim_spearman
value: 31.366121633635284
- type: euclidean_pearson
value: 29.14588368552961
- type: euclidean_spearman
value: 31.36764411112844
- type: manhattan_pearson
value: 29.63517350523121
- type: manhattan_spearman
value: 31.94157020583762
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 63.64868296271406
- type: cos_sim_spearman
value: 66.12800618164744
- type: euclidean_pearson
value: 63.21405767340238
- type: euclidean_spearman
value: 66.12786567790748
- type: manhattan_pearson
value: 64.04300276525848
- type: manhattan_spearman
value: 66.5066857145652
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 81.2302623912794
- type: cos_sim_spearman
value: 81.16833673266562
- type: euclidean_pearson
value: 79.47647843876024
- type: euclidean_spearman
value: 81.16944349524972
- type: manhattan_pearson
value: 79.84947238492208
- type: manhattan_spearman
value: 81.64626599410026
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.80129586475687
- type: mrr
value: 77.77402311635554
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 28.666999999999998
- type: map_at_10
value: 81.063
- type: map_at_100
value: 84.504
- type: map_at_1000
value: 84.552
- type: map_at_3
value: 56.897
- type: map_at_5
value: 70.073
- type: mrr_at_1
value: 92.087
- type: mrr_at_10
value: 94.132
- type: mrr_at_100
value: 94.19800000000001
- type: mrr_at_1000
value: 94.19999999999999
- type: mrr_at_3
value: 93.78999999999999
- type: mrr_at_5
value: 94.002
- type: ndcg_at_1
value: 92.087
- type: ndcg_at_10
value: 87.734
- type: ndcg_at_100
value: 90.736
- type: ndcg_at_1000
value: 91.184
- type: ndcg_at_3
value: 88.78
- type: ndcg_at_5
value: 87.676
- type: precision_at_1
value: 92.087
- type: precision_at_10
value: 43.46
- type: precision_at_100
value: 5.07
- type: precision_at_1000
value: 0.518
- type: precision_at_3
value: 77.49000000000001
- type: precision_at_5
value: 65.194
- type: recall_at_1
value: 28.666999999999998
- type: recall_at_10
value: 86.632
- type: recall_at_100
value: 96.646
- type: recall_at_1000
value: 98.917
- type: recall_at_3
value: 58.333999999999996
- type: recall_at_5
value: 72.974
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 52.971999999999994
- type: f1
value: 50.2898280984929
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 86.0797948663824
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 85.10759092255017
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 65.60000000000001
- type: map_at_10
value: 74.773
- type: map_at_100
value: 75.128
- type: map_at_1000
value: 75.136
- type: map_at_3
value: 73.05
- type: map_at_5
value: 74.13499999999999
- type: mrr_at_1
value: 65.60000000000001
- type: mrr_at_10
value: 74.773
- type: mrr_at_100
value: 75.128
- type: mrr_at_1000
value: 75.136
- type: mrr_at_3
value: 73.05
- type: mrr_at_5
value: 74.13499999999999
- type: ndcg_at_1
value: 65.60000000000001
- type: ndcg_at_10
value: 78.84299999999999
- type: ndcg_at_100
value: 80.40899999999999
- type: ndcg_at_1000
value: 80.57
- type: ndcg_at_3
value: 75.40599999999999
- type: ndcg_at_5
value: 77.351
- type: precision_at_1
value: 65.60000000000001
- type: precision_at_10
value: 9.139999999999999
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 27.400000000000002
- type: precision_at_5
value: 17.380000000000003
- type: recall_at_1
value: 65.60000000000001
- type: recall_at_10
value: 91.4
- type: recall_at_100
value: 98.4
- type: recall_at_1000
value: 99.6
- type: recall_at_3
value: 82.19999999999999
- type: recall_at_5
value: 86.9
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 89.47
- type: ap
value: 75.59561751845389
- type: f1
value: 87.95207751382563
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 76.05592323841036
- type: v_measure
value: 64.51718058866508
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
metrics:
- type: map
value: 73.08278490943373
- type: mrr
value: 74.66561454570449
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: map_at_1
value: 38.912
- type: map_at_10
value: 52.437999999999995
- type: map_at_100
value: 53.38
- type: map_at_1000
value: 53.427
- type: map_at_3
value: 48.879
- type: map_at_5
value: 50.934000000000005
- type: mrr_at_1
value: 44.085
- type: mrr_at_10
value: 55.337
- type: mrr_at_100
value: 56.016999999999996
- type: mrr_at_1000
value: 56.043
- type: mrr_at_3
value: 52.55499999999999
- type: mrr_at_5
value: 54.20399999999999
- type: ndcg_at_1
value: 44.085
- type: ndcg_at_10
value: 58.876
- type: ndcg_at_100
value: 62.714000000000006
- type: ndcg_at_1000
value: 63.721000000000004
- type: ndcg_at_3
value: 52.444
- type: ndcg_at_5
value: 55.692
- type: precision_at_1
value: 44.085
- type: precision_at_10
value: 9.21
- type: precision_at_100
value: 1.164
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 23.043
- type: precision_at_5
value: 15.898000000000001
- type: recall_at_1
value: 38.912
- type: recall_at_10
value: 75.577
- type: recall_at_100
value: 92.038
- type: recall_at_1000
value: 99.325
- type: recall_at_3
value: 58.592
- type: recall_at_5
value: 66.235
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.532000000000004
- type: f1
value: 52.5783943471605
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 8.108
- type: map_at_10
value: 14.710999999999999
- type: map_at_100
value: 15.891
- type: map_at_1000
value: 15.983
- type: map_at_3
value: 12.237
- type: map_at_5
value: 13.679
- type: mrr_at_1
value: 8.108
- type: mrr_at_10
value: 14.710999999999999
- type: mrr_at_100
value: 15.891
- type: mrr_at_1000
value: 15.983
- type: mrr_at_3
value: 12.237
- type: mrr_at_5
value: 13.679
- type: ndcg_at_1
value: 8.108
- type: ndcg_at_10
value: 18.796
- type: ndcg_at_100
value: 25.098
- type: ndcg_at_1000
value: 27.951999999999998
- type: ndcg_at_3
value: 13.712
- type: ndcg_at_5
value: 16.309
- type: precision_at_1
value: 8.108
- type: precision_at_10
value: 3.198
- type: precision_at_100
value: 0.626
- type: precision_at_1000
value: 0.086
- type: precision_at_3
value: 6.006
- type: precision_at_5
value: 4.865
- type: recall_at_1
value: 8.108
- type: recall_at_10
value: 31.982
- type: recall_at_100
value: 62.613
- type: recall_at_1000
value: 86.036
- type: recall_at_3
value: 18.018
- type: recall_at_5
value: 24.324
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 30.833269778867116
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 50.0281928004713
- type: v_measure
value: 43.699961510636534
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.68963357344191
- type: f1
value: 96.45175170820961
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.46946445349202
- type: f1
value: 65.79860440988624
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 82.60663507109005
- type: f1
value: 77.20462646604777
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 60.19311264967803
- type: v_measure
value: 63.6235764409785
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.65097511768661
- type: f1
value: 78.77796091490924
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.64425016812373
- type: f1
value: 85.4912728670017
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 35.913000000000004
- type: map_at_10
value: 48.147
- type: map_at_100
value: 48.91
- type: map_at_1000
value: 48.949
- type: map_at_3
value: 45.269999999999996
- type: map_at_5
value: 47.115
- type: mrr_at_1
value: 35.913000000000004
- type: mrr_at_10
value: 48.147
- type: mrr_at_100
value: 48.91
- type: mrr_at_1000
value: 48.949
- type: mrr_at_3
value: 45.269999999999996
- type: mrr_at_5
value: 47.115
- type: ndcg_at_1
value: 35.913000000000004
- type: ndcg_at_10
value: 54.03
- type: ndcg_at_100
value: 57.839
- type: ndcg_at_1000
value: 58.925000000000004
- type: ndcg_at_3
value: 48.217999999999996
- type: ndcg_at_5
value: 51.56699999999999
- type: precision_at_1
value: 35.913000000000004
- type: precision_at_10
value: 7.244000000000001
- type: precision_at_100
value: 0.9039999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 18.905
- type: precision_at_5
value: 12.981000000000002
- type: recall_at_1
value: 35.913000000000004
- type: recall_at_10
value: 72.441
- type: recall_at_100
value: 90.41799999999999
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 56.716
- type: recall_at_5
value: 64.90599999999999
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 75.25
- type: cos_sim_ap
value: 80.86376001270014
- type: cos_sim_f1
value: 73.65945437441204
- type: cos_sim_precision
value: 64.02289452166802
- type: cos_sim_recall
value: 86.71096345514951
- type: dot_accuracy
value: 75.25
- type: dot_ap
value: 80.93686107633002
- type: dot_f1
value: 73.65945437441204
- type: dot_precision
value: 64.02289452166802
- type: dot_recall
value: 86.71096345514951
- type: euclidean_accuracy
value: 75.25
- type: euclidean_ap
value: 80.86379136218862
- type: euclidean_f1
value: 73.65945437441204
- type: euclidean_precision
value: 64.02289452166802
- type: euclidean_recall
value: 86.71096345514951
- type: manhattan_accuracy
value: 75.3
- type: manhattan_ap
value: 80.87826606097734
- type: manhattan_f1
value: 73.68421052631581
- type: manhattan_precision
value: 64.0
- type: manhattan_recall
value: 86.82170542635659
- type: max_accuracy
value: 75.3
- type: max_ap
value: 80.93686107633002
- type: max_f1
value: 73.68421052631581
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 81.42349425981143
- type: cos_sim_spearman
value: 78.90454327031226
- type: euclidean_pearson
value: 78.39086497435166
- type: euclidean_spearman
value: 78.9046133980509
- type: manhattan_pearson
value: 78.63743094286502
- type: manhattan_spearman
value: 79.12136348449269
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 81.452697919749
- type: cos_sim_spearman
value: 82.58116836039301
- type: euclidean_pearson
value: 81.04038478932786
- type: euclidean_spearman
value: 82.58116836039301
- type: manhattan_pearson
value: 81.37075396187771
- type: manhattan_spearman
value: 82.73678231355368
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: stsb_multi_mt
config: fr
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 85.7419764013806
- type: cos_sim_spearman
value: 85.46085808849622
- type: euclidean_pearson
value: 83.70449639870063
- type: euclidean_spearman
value: 85.46159013076233
- type: manhattan_pearson
value: 83.95259510313929
- type: manhattan_spearman
value: 85.8029724659458
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 32.61063271753325
- type: cos_sim_spearman
value: 31.454589417353603
- type: dot_pearson
value: 32.6106288643431
- type: dot_spearman
value: 31.454589417353603
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 84.31666666666666
- type: mrr
value: 84.31666666666666
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
metrics:
- type: map_at_1
value: 63.0
- type: map_at_10
value: 73.471
- type: map_at_100
value: 73.87
- type: map_at_1000
value: 73.87
- type: map_at_3
value: 70.5
- type: map_at_5
value: 73.05
- type: mrr_at_1
value: 63.0
- type: mrr_at_10
value: 73.471
- type: mrr_at_100
value: 73.87
- type: mrr_at_1000
value: 73.87
- type: mrr_at_3
value: 70.5
- type: mrr_at_5
value: 73.05
- type: ndcg_at_1
value: 63.0
- type: ndcg_at_10
value: 78.255
- type: ndcg_at_100
value: 79.88
- type: ndcg_at_1000
value: 79.88
- type: ndcg_at_3
value: 72.702
- type: ndcg_at_5
value: 77.264
- type: precision_at_1
value: 63.0
- type: precision_at_10
value: 9.3
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 26.333000000000002
- type: precision_at_5
value: 18.0
- type: recall_at_1
value: 63.0
- type: recall_at_10
value: 93.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 79.0
- type: recall_at_5
value: 90.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 40.338
- type: map_at_10
value: 61.927
- type: map_at_100
value: 63.361999999999995
- type: map_at_1000
value: 63.405
- type: map_at_3
value: 55.479
- type: map_at_5
value: 59.732
- type: mrr_at_1
value: 63.551
- type: mrr_at_10
value: 71.006
- type: mrr_at_100
value: 71.501
- type: mrr_at_1000
value: 71.509
- type: mrr_at_3
value: 69.07
- type: mrr_at_5
value: 70.165
- type: ndcg_at_1
value: 63.551
- type: ndcg_at_10
value: 68.297
- type: ndcg_at_100
value: 73.13199999999999
- type: ndcg_at_1000
value: 73.751
- type: ndcg_at_3
value: 62.999
- type: ndcg_at_5
value: 64.89
- type: precision_at_1
value: 63.551
- type: precision_at_10
value: 15.661
- type: precision_at_100
value: 1.9789999999999999
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 38.273
- type: precision_at_5
value: 27.61
- type: recall_at_1
value: 40.338
- type: recall_at_10
value: 77.267
- type: recall_at_100
value: 95.892
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 60.36
- type: recall_at_5
value: 68.825
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 51.36126303874126
- task:
type: Classification
dataset:
name: MTEB AllegroReviews
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 67.13717693836979
- type: f1
value: 57.27609848003782
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: map_at_1
value: 35.276999999999994
- type: map_at_10
value: 51.086
- type: map_at_100
value: 51.788000000000004
- type: map_at_1000
value: 51.791
- type: map_at_3
value: 46.147
- type: map_at_5
value: 49.078
- type: mrr_at_1
value: 35.917
- type: mrr_at_10
value: 51.315999999999995
- type: mrr_at_100
value: 52.018
- type: mrr_at_1000
value: 52.022
- type: mrr_at_3
value: 46.349000000000004
- type: mrr_at_5
value: 49.297000000000004
- type: ndcg_at_1
value: 35.276999999999994
- type: ndcg_at_10
value: 59.870999999999995
- type: ndcg_at_100
value: 62.590999999999994
- type: ndcg_at_1000
value: 62.661
- type: ndcg_at_3
value: 49.745
- type: ndcg_at_5
value: 55.067
- type: precision_at_1
value: 35.276999999999994
- type: precision_at_10
value: 8.791
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.057
- type: precision_at_5
value: 14.637
- type: recall_at_1
value: 35.276999999999994
- type: recall_at_10
value: 87.909
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 60.171
- type: recall_at_5
value: 73.18599999999999
- task:
type: Classification
dataset:
name: MTEB CBD
type: PL-MTEB/cbd
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 78.03000000000002
- type: ap
value: 29.12548553897622
- type: f1
value: 66.54857118886073
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 89.0
- type: cos_sim_ap
value: 76.75437826834582
- type: cos_sim_f1
value: 66.4850136239782
- type: cos_sim_precision
value: 68.92655367231639
- type: cos_sim_recall
value: 64.21052631578948
- type: dot_accuracy
value: 89.0
- type: dot_ap
value: 76.75437826834582
- type: dot_f1
value: 66.4850136239782
- type: dot_precision
value: 68.92655367231639
- type: dot_recall
value: 64.21052631578948
- type: euclidean_accuracy
value: 89.0
- type: euclidean_ap
value: 76.75437826834582
- type: euclidean_f1
value: 66.4850136239782
- type: euclidean_precision
value: 68.92655367231639
- type: euclidean_recall
value: 64.21052631578948
- type: manhattan_accuracy
value: 89.0
- type: manhattan_ap
value: 76.66074220647083
- type: manhattan_f1
value: 66.47058823529412
- type: manhattan_precision
value: 75.33333333333333
- type: manhattan_recall
value: 59.473684210526315
- type: max_accuracy
value: 89.0
- type: max_ap
value: 76.75437826834582
- type: max_f1
value: 66.4850136239782
- task:
type: STS
dataset:
name: MTEB CDSC-R
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 93.12903172428328
- type: cos_sim_spearman
value: 92.66381487060741
- type: euclidean_pearson
value: 90.37278396708922
- type: euclidean_spearman
value: 92.66381487060741
- type: manhattan_pearson
value: 90.32503296540962
- type: manhattan_spearman
value: 92.6902938354313
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: map_at_1
value: 8.83
- type: map_at_10
value: 18.326
- type: map_at_100
value: 26.496
- type: map_at_1000
value: 28.455000000000002
- type: map_at_3
value: 12.933
- type: map_at_5
value: 15.168000000000001
- type: mrr_at_1
value: 66.0
- type: mrr_at_10
value: 72.76700000000001
- type: mrr_at_100
value: 73.203
- type: mrr_at_1000
value: 73.219
- type: mrr_at_3
value: 71.458
- type: mrr_at_5
value: 72.246
- type: ndcg_at_1
value: 55.375
- type: ndcg_at_10
value: 41.3
- type: ndcg_at_100
value: 45.891
- type: ndcg_at_1000
value: 52.905
- type: ndcg_at_3
value: 46.472
- type: ndcg_at_5
value: 43.734
- type: precision_at_1
value: 66.0
- type: precision_at_10
value: 33.074999999999996
- type: precision_at_100
value: 11.094999999999999
- type: precision_at_1000
value: 2.374
- type: precision_at_3
value: 48.583
- type: precision_at_5
value: 42.0
- type: recall_at_1
value: 8.83
- type: recall_at_10
value: 22.587
- type: recall_at_100
value: 50.61600000000001
- type: recall_at_1000
value: 73.559
- type: recall_at_3
value: 13.688
- type: recall_at_5
value: 16.855
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: map_at_1
value: 20.587
- type: map_at_10
value: 33.095
- type: map_at_100
value: 35.24
- type: map_at_1000
value: 35.429
- type: map_at_3
value: 28.626
- type: map_at_5
value: 31.136999999999997
- type: mrr_at_1
value: 40.586
- type: mrr_at_10
value: 49.033
- type: mrr_at_100
value: 49.952999999999996
- type: mrr_at_1000
value: 49.992
- type: mrr_at_3
value: 46.553
- type: mrr_at_5
value: 48.035
- type: ndcg_at_1
value: 40.586
- type: ndcg_at_10
value: 41.046
- type: ndcg_at_100
value: 48.586
- type: ndcg_at_1000
value: 51.634
- type: ndcg_at_3
value: 36.773
- type: ndcg_at_5
value: 38.389
- type: precision_at_1
value: 40.586
- type: precision_at_10
value: 11.466
- type: precision_at_100
value: 1.909
- type: precision_at_1000
value: 0.245
- type: precision_at_3
value: 24.434
- type: precision_at_5
value: 18.426000000000002
- type: recall_at_1
value: 20.587
- type: recall_at_10
value: 47.986000000000004
- type: recall_at_100
value: 75.761
- type: recall_at_1000
value: 94.065
- type: recall_at_3
value: 33.339
- type: recall_at_5
value: 39.765
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: map_at_1
value: 40.878
- type: map_at_10
value: 58.775999999999996
- type: map_at_100
value: 59.632
- type: map_at_1000
value: 59.707
- type: map_at_3
value: 56.074
- type: map_at_5
value: 57.629
- type: mrr_at_1
value: 81.756
- type: mrr_at_10
value: 86.117
- type: mrr_at_100
value: 86.299
- type: mrr_at_1000
value: 86.30600000000001
- type: mrr_at_3
value: 85.345
- type: mrr_at_5
value: 85.832
- type: ndcg_at_1
value: 81.756
- type: ndcg_at_10
value: 67.608
- type: ndcg_at_100
value: 70.575
- type: ndcg_at_1000
value: 71.99600000000001
- type: ndcg_at_3
value: 63.723
- type: ndcg_at_5
value: 65.70700000000001
- type: precision_at_1
value: 81.756
- type: precision_at_10
value: 13.619
- type: precision_at_100
value: 1.5939999999999999
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 39.604
- type: precision_at_5
value: 25.332
- type: recall_at_1
value: 40.878
- type: recall_at_10
value: 68.096
- type: recall_at_100
value: 79.696
- type: recall_at_1000
value: 89.082
- type: recall_at_3
value: 59.406000000000006
- type: recall_at_5
value: 63.329
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: map_at_1
value: 2.1839999999999997
- type: map_at_10
value: 11.346
- type: map_at_100
value: 30.325000000000003
- type: map_at_1000
value: 37.806
- type: map_at_3
value: 4.842
- type: map_at_5
value: 6.891
- type: mrr_at_1
value: 86.047
- type: mrr_at_10
value: 89.14699999999999
- type: mrr_at_100
value: 89.46600000000001
- type: mrr_at_1000
value: 89.46600000000001
- type: mrr_at_3
value: 89.14699999999999
- type: mrr_at_5
value: 89.14699999999999
- type: ndcg_at_1
value: 67.829
- type: ndcg_at_10
value: 62.222
- type: ndcg_at_100
value: 55.337
- type: ndcg_at_1000
value: 64.076
- type: ndcg_at_3
value: 68.12700000000001
- type: ndcg_at_5
value: 64.987
- type: precision_at_1
value: 86.047
- type: precision_at_10
value: 69.535
- type: precision_at_100
value: 32.93
- type: precision_at_1000
value: 6.6049999999999995
- type: precision_at_3
value: 79.845
- type: precision_at_5
value: 75.349
- type: recall_at_1
value: 2.1839999999999997
- type: recall_at_10
value: 12.866
- type: recall_at_100
value: 43.505
- type: recall_at_1000
value: 72.366
- type: recall_at_3
value: 4.947
- type: recall_at_5
value: 7.192
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.75319435104238
- type: f1
value: 77.58961444860606
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 85.54472091459313
- type: f1
value: 84.29498563572106
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: map_at_1
value: 4.367
- type: map_at_10
value: 10.38
- type: map_at_100
value: 13.516
- type: map_at_1000
value: 14.982000000000001
- type: map_at_3
value: 7.367
- type: map_at_5
value: 8.59
- type: mrr_at_1
value: 41.486000000000004
- type: mrr_at_10
value: 48.886
- type: mrr_at_100
value: 49.657000000000004
- type: mrr_at_1000
value: 49.713
- type: mrr_at_3
value: 46.904
- type: mrr_at_5
value: 48.065000000000005
- type: ndcg_at_1
value: 40.402
- type: ndcg_at_10
value: 30.885
- type: ndcg_at_100
value: 28.393
- type: ndcg_at_1000
value: 37.428
- type: ndcg_at_3
value: 35.394999999999996
- type: ndcg_at_5
value: 33.391999999999996
- type: precision_at_1
value: 41.486000000000004
- type: precision_at_10
value: 23.437
- type: precision_at_100
value: 7.638
- type: precision_at_1000
value: 2.0389999999999997
- type: precision_at_3
value: 32.817
- type: precision_at_5
value: 28.915999999999997
- type: recall_at_1
value: 4.367
- type: recall_at_10
value: 14.655000000000001
- type: recall_at_100
value: 29.665999999999997
- type: recall_at_1000
value: 62.073
- type: recall_at_3
value: 8.51
- type: recall_at_5
value: 10.689
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: map_at_1
value: 28.616000000000003
- type: map_at_10
value: 41.626000000000005
- type: map_at_100
value: 42.689
- type: map_at_1000
value: 42.733
- type: map_at_3
value: 37.729
- type: map_at_5
value: 39.879999999999995
- type: mrr_at_1
value: 32.068000000000005
- type: mrr_at_10
value: 44.029
- type: mrr_at_100
value: 44.87
- type: mrr_at_1000
value: 44.901
- type: mrr_at_3
value: 40.687
- type: mrr_at_5
value: 42.625
- type: ndcg_at_1
value: 32.068000000000005
- type: ndcg_at_10
value: 48.449999999999996
- type: ndcg_at_100
value: 53.13
- type: ndcg_at_1000
value: 54.186
- type: ndcg_at_3
value: 40.983999999999995
- type: ndcg_at_5
value: 44.628
- type: precision_at_1
value: 32.068000000000005
- type: precision_at_10
value: 7.9750000000000005
- type: precision_at_100
value: 1.061
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 18.404999999999998
- type: precision_at_5
value: 13.111
- type: recall_at_1
value: 28.616000000000003
- type: recall_at_10
value: 66.956
- type: recall_at_100
value: 87.657
- type: recall_at_1000
value: 95.548
- type: recall_at_3
value: 47.453
- type: recall_at_5
value: 55.87800000000001
- task:
type: Classification
dataset:
name: MTEB PAC
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.47589122111044
- type: f1
value: 66.6332277374775
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.4
- type: cos_sim_ap
value: 94.1044939667201
- type: cos_sim_f1
value: 88.78048780487805
- type: cos_sim_precision
value: 87.22044728434504
- type: cos_sim_recall
value: 90.39735099337747
- type: dot_accuracy
value: 86.4
- type: dot_ap
value: 94.1044939667201
- type: dot_f1
value: 88.78048780487805
- type: dot_precision
value: 87.22044728434504
- type: dot_recall
value: 90.39735099337747
- type: euclidean_accuracy
value: 86.4
- type: euclidean_ap
value: 94.1044939667201
- type: euclidean_f1
value: 88.78048780487805
- type: euclidean_precision
value: 87.22044728434504
- type: euclidean_recall
value: 90.39735099337747
- type: manhattan_accuracy
value: 86.4
- type: manhattan_ap
value: 94.11438365697387
- type: manhattan_f1
value: 88.77968877968877
- type: manhattan_precision
value: 87.84440842787681
- type: manhattan_recall
value: 89.73509933774835
- type: max_accuracy
value: 86.4
- type: max_ap
value: 94.11438365697387
- type: max_f1
value: 88.78048780487805
- task:
type: PairClassification
dataset:
name: MTEB PSC
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 97.86641929499072
- type: cos_sim_ap
value: 99.36904211868182
- type: cos_sim_f1
value: 96.56203288490283
- type: cos_sim_precision
value: 94.72140762463343
- type: cos_sim_recall
value: 98.47560975609755
- type: dot_accuracy
value: 97.86641929499072
- type: dot_ap
value: 99.36904211868183
- type: dot_f1
value: 96.56203288490283
- type: dot_precision
value: 94.72140762463343
- type: dot_recall
value: 98.47560975609755
- type: euclidean_accuracy
value: 97.86641929499072
- type: euclidean_ap
value: 99.36904211868183
- type: euclidean_f1
value: 96.56203288490283
- type: euclidean_precision
value: 94.72140762463343
- type: euclidean_recall
value: 98.47560975609755
- type: manhattan_accuracy
value: 98.14471243042672
- type: manhattan_ap
value: 99.43359540492416
- type: manhattan_f1
value: 96.98795180722892
- type: manhattan_precision
value: 95.83333333333334
- type: manhattan_recall
value: 98.17073170731707
- type: max_accuracy
value: 98.14471243042672
- type: max_ap
value: 99.43359540492416
- type: max_f1
value: 96.98795180722892
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN
type: PL-MTEB/polemo2_in
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.39058171745152
- type: f1
value: 86.8552093529568
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT
type: PL-MTEB/polemo2_out
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 74.97975708502024
- type: f1
value: 58.73081628832407
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: map_at_1
value: 64.917
- type: map_at_10
value: 78.74600000000001
- type: map_at_100
value: 79.501
- type: map_at_1000
value: 79.524
- type: map_at_3
value: 75.549
- type: map_at_5
value: 77.495
- type: mrr_at_1
value: 74.9
- type: mrr_at_10
value: 82.112
- type: mrr_at_100
value: 82.314
- type: mrr_at_1000
value: 82.317
- type: mrr_at_3
value: 80.745
- type: mrr_at_5
value: 81.607
- type: ndcg_at_1
value: 74.83999999999999
- type: ndcg_at_10
value: 83.214
- type: ndcg_at_100
value: 84.997
- type: ndcg_at_1000
value: 85.207
- type: ndcg_at_3
value: 79.547
- type: ndcg_at_5
value: 81.46600000000001
- type: precision_at_1
value: 74.83999999999999
- type: precision_at_10
value: 12.822
- type: precision_at_100
value: 1.506
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 34.903
- type: precision_at_5
value: 23.16
- type: recall_at_1
value: 64.917
- type: recall_at_10
value: 92.27199999999999
- type: recall_at_100
value: 98.715
- type: recall_at_1000
value: 99.854
- type: recall_at_3
value: 82.04599999999999
- type: recall_at_5
value: 87.2
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: map_at_1
value: 3.51
- type: map_at_10
value: 9.046999999999999
- type: map_at_100
value: 10.823
- type: map_at_1000
value: 11.144
- type: map_at_3
value: 6.257
- type: map_at_5
value: 7.648000000000001
- type: mrr_at_1
value: 17.299999999999997
- type: mrr_at_10
value: 27.419
- type: mrr_at_100
value: 28.618
- type: mrr_at_1000
value: 28.685
- type: mrr_at_3
value: 23.817
- type: mrr_at_5
value: 25.927
- type: ndcg_at_1
value: 17.299999999999997
- type: ndcg_at_10
value: 16.084
- type: ndcg_at_100
value: 23.729
- type: ndcg_at_1000
value: 29.476999999999997
- type: ndcg_at_3
value: 14.327000000000002
- type: ndcg_at_5
value: 13.017999999999999
- type: precision_at_1
value: 17.299999999999997
- type: precision_at_10
value: 8.63
- type: precision_at_100
value: 1.981
- type: precision_at_1000
value: 0.336
- type: precision_at_3
value: 13.4
- type: precision_at_5
value: 11.700000000000001
- type: recall_at_1
value: 3.51
- type: recall_at_10
value: 17.518
- type: recall_at_100
value: 40.275
- type: recall_at_1000
value: 68.203
- type: recall_at_3
value: 8.155
- type: recall_at_5
value: 11.875
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.30248675091724
- type: cos_sim_ap
value: 83.6756734006714
- type: cos_sim_f1
value: 74.97367497367497
- type: cos_sim_precision
value: 73.91003460207612
- type: cos_sim_recall
value: 76.06837606837607
- type: dot_accuracy
value: 86.30248675091724
- type: dot_ap
value: 83.6756734006714
- type: dot_f1
value: 74.97367497367497
- type: dot_precision
value: 73.91003460207612
- type: dot_recall
value: 76.06837606837607
- type: euclidean_accuracy
value: 86.30248675091724
- type: euclidean_ap
value: 83.67566984333091
- type: euclidean_f1
value: 74.97367497367497
- type: euclidean_precision
value: 73.91003460207612
- type: euclidean_recall
value: 76.06837606837607
- type: manhattan_accuracy
value: 86.28210354667753
- type: manhattan_ap
value: 83.64216119130171
- type: manhattan_f1
value: 74.92152075340078
- type: manhattan_precision
value: 73.4107997265892
- type: manhattan_recall
value: 76.49572649572649
- type: max_accuracy
value: 86.30248675091724
- type: max_ap
value: 83.6756734006714
- type: max_f1
value: 74.97367497367497
- task:
type: STS
dataset:
name: MTEB SICK-R-PL
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 82.23295940859121
- type: cos_sim_spearman
value: 78.89329160768719
- type: euclidean_pearson
value: 79.56019107076818
- type: euclidean_spearman
value: 78.89330209904084
- type: manhattan_pearson
value: 79.76098513973719
- type: manhattan_spearman
value: 79.05490162570123
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 37.732606308062486
- type: cos_sim_spearman
value: 41.01645667030284
- type: euclidean_pearson
value: 26.61722556367085
- type: euclidean_spearman
value: 41.01645667030284
- type: manhattan_pearson
value: 26.60917378970807
- type: manhattan_spearman
value: 41.51335727617614
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: map_at_1
value: 54.31700000000001
- type: map_at_10
value: 65.564
- type: map_at_100
value: 66.062
- type: map_at_1000
value: 66.08699999999999
- type: map_at_3
value: 62.592999999999996
- type: map_at_5
value: 63.888
- type: mrr_at_1
value: 56.99999999999999
- type: mrr_at_10
value: 66.412
- type: mrr_at_100
value: 66.85900000000001
- type: mrr_at_1000
value: 66.88
- type: mrr_at_3
value: 64.22200000000001
- type: mrr_at_5
value: 65.206
- type: ndcg_at_1
value: 56.99999999999999
- type: ndcg_at_10
value: 70.577
- type: ndcg_at_100
value: 72.879
- type: ndcg_at_1000
value: 73.45
- type: ndcg_at_3
value: 65.5
- type: ndcg_at_5
value: 67.278
- type: precision_at_1
value: 56.99999999999999
- type: precision_at_10
value: 9.667
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.0
- type: precision_at_5
value: 16.933
- type: recall_at_1
value: 54.31700000000001
- type: recall_at_10
value: 85.056
- type: recall_at_100
value: 95.667
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 71.0
- type: recall_at_5
value: 75.672
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: map_at_1
value: 0.245
- type: map_at_10
value: 2.051
- type: map_at_100
value: 12.009
- type: map_at_1000
value: 27.448
- type: map_at_3
value: 0.721
- type: map_at_5
value: 1.13
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.0
- type: mrr_at_100
value: 93.0
- type: mrr_at_1000
value: 93.0
- type: mrr_at_3
value: 93.0
- type: mrr_at_5
value: 93.0
- type: ndcg_at_1
value: 85.0
- type: ndcg_at_10
value: 80.303
- type: ndcg_at_100
value: 61.23499999999999
- type: ndcg_at_1000
value: 52.978
- type: ndcg_at_3
value: 84.419
- type: ndcg_at_5
value: 82.976
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 83.39999999999999
- type: precision_at_100
value: 61.96
- type: precision_at_1000
value: 22.648
- type: precision_at_3
value: 89.333
- type: precision_at_5
value: 87.2
- type: recall_at_1
value: 0.245
- type: recall_at_10
value: 2.193
- type: recall_at_100
value: 14.938
- type: recall_at_1000
value: 48.563
- type: recall_at_3
value: 0.738
- type: recall_at_5
value: 1.173
---
## gte-Qwen2-7B-instruct
**gte-Qwen2-7B-instruct** is the latest model in the gte (General Text Embedding) model family, ranking **No.1** in both English and Chinese evaluations on the Massive Text Embedding Benchmark ([MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard)) as of June 16, 2024.
Recently, the [**Qwen team**](https://huggingface.co/Qwen) released the Qwen2 series models, and we have trained the **gte-Qwen2-7B-instruct** model based on the [Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) LLM. Compared to the [gte-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) model, **gte-Qwen2-7B-instruct** uses the same training data and training strategies during the finetuning stage, with the only difference being the upgraded base model to Qwen2-7B. Given the improvements of the Qwen2 series over the Qwen1.5 series, we can also expect consistent performance enhancements in the embedding models.
The model incorporates several key advancements:
- Integration of bidirectional attention mechanisms, enriching its contextual understanding.
- Instruction tuning, applied solely on the query side for streamlined efficiency (a short sketch of this convention follows the list below).
- Comprehensive training across a vast, multilingual text corpus spanning diverse domains and scenarios. This training leverages both weakly supervised and supervised data, ensuring the model's applicability across numerous languages and a wide array of downstream tasks.
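Concretely, query-side instruction tuning means that only queries carry a one-sentence task instruction at inference time, while documents are embedded unchanged. A minimal sketch of the convention used throughout the examples below (the instruction text is only an example):
```python
# Queries carry a one-sentence task instruction; documents do not.
task = "Given a web search query, retrieve relevant passages that answer the query"
query = f"Instruct: {task}\nQuery: how much protein should a female eat"
document = "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day."
```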
## Model Information
- Model Size: 7B
- Embedding Dimension: 3584
- Max Input Tokens: 32k
## Requirements
```
transformers>=4.39.2
flash_attn>=2.5.6
```
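A possible install command covering both requirements plus `sentence-transformers` (an extra assumption needed for the first usage example below); note that `flash_attn` typically needs a CUDA toolchain and a pre-installed PyTorch to build.
```shell
pip install "transformers>=4.39.2" sentence-transformers
pip install "flash_attn>=2.5.6" --no-build-isolation
```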
## Usage
### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-7B-instruct", trust_remote_code=True)
# In case you want to reduce the maximum length:
model.max_seq_length = 8192
queries = [
"how much protein should a female eat",
"summit define",
]
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())
```
See [config_sentence_transformers.json](config_sentence_transformers.json) for all pre-built prompt names. Otherwise, you can use `model.encode(queries, prompt="Instruct: ...\nQuery: ")` to use a custom prompt of your choice.
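For illustration, a minimal sketch of the custom-prompt variant described above; the task description is only an example, and scoring follows the same dot-product convention as the snippet before it.
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-7B-instruct", trust_remote_code=True)

# Hypothetical one-sentence task description; adjust it to your retrieval task.
task = "Given a web search query, retrieve relevant passages that answer the query"

queries = ["how much protein should a female eat"]
documents = ["As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day."]

# The custom prompt is applied to queries only; documents are encoded as-is.
query_embeddings = model.encode(queries, prompt=f"Instruct: {task}\nQuery: ")
document_embeddings = model.encode(documents)

scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())
```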
### Transformers
```python
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def last_token_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
if left_padding:
return last_hidden_states[:, -1]
else:
sequence_lengths = attention_mask.sum(dim=1) - 1
batch_size = last_hidden_states.shape[0]
return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'how much protein should a female eat'),
get_detailed_instruct(task, 'summit define')
]
# No need to add instruction for retrieval documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-Qwen2-7B-instruct', trust_remote_code=True)
model = AutoModel.from_pretrained('Alibaba-NLP/gte-Qwen2-7B-instruct', trust_remote_code=True)
max_length = 8192
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
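For GPU inference, the same flow can be run in half precision. This is a minimal sketch only; the dtype and device choices are assumptions, not requirements of the model.
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-Qwen2-7B-instruct', trust_remote_code=True)
model = AutoModel.from_pretrained(
    'Alibaba-NLP/gte-Qwen2-7B-instruct',
    trust_remote_code=True,
    torch_dtype=torch.float16,  # assumption: fp16 fits in the available GPU memory
).to('cuda').eval()

texts = ['Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: summit define']
batch = tokenizer(texts, max_length=8192, padding=True, truncation=True, return_tensors='pt').to('cuda')

with torch.no_grad():
    outputs = model(**batch)

# Last-token pooling, equivalent to last_token_pool above for right-padded inputs
seq_lengths = batch['attention_mask'].sum(dim=1) - 1
embeddings = outputs.last_hidden_state[torch.arange(len(texts), device='cuda'), seq_lengths]
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # (1, 3584)
```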
## Infinity_emb
Usage via [infinity](https://github.com/michaelfeil/infinity), an MIT-licensed inference server.
```
# requires ~16-32GB VRAM NVIDIA Compute Capability >= 8.0
docker run \
-v $PWD/data:/app/.cache --gpus "0" -p "7997":"7997" \
michaelf34/infinity:0.0.68-trt-onnx \
v2 --model-id Alibaba-NLP/gte-Qwen2-7B-instruct --revision "refs/pr/38" --dtype bfloat16 --batch-size 8 --device cuda --engine torch --port 7997 --no-bettertransformer
```
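Once the container is running, embeddings can be requested over HTTP. Infinity exposes an OpenAI-compatible embeddings route, so a client sketch like the following should work; the exact route and payload may vary with the infinity version, and `localhost:7997` is assumed from the command above.
```python
import requests

response = requests.post(
    "http://localhost:7997/embeddings",
    json={
        "model": "Alibaba-NLP/gte-Qwen2-7B-instruct",
        "input": [
            "Instruct: Given a web search query, retrieve relevant passages that answer the query\nQuery: summit define"
        ],
    },
    timeout=120,
)
response.raise_for_status()
embedding = response.json()["data"][0]["embedding"]
print(len(embedding))  # expected to match the 3584-dim embedding size
```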
## Evaluation
### MTEB & C-MTEB
You can use [scripts/eval_mteb.py](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct/blob/main/scripts/eval_mteb.py) to reproduce the following results of **gte-Qwen2-7B-instruct** on MTEB (English) / C-MTEB (Chinese):
| Model Name | MTEB(56) | C-MTEB(35) | MTEB-fr(26) | MTEB-pl(26) |
|:----:|:---------:|:----------:|:----------:|:----------:|
| [bge-base-en-1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 64.23 | - | - | - |
| [bge-large-en-1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 63.55 | - | - | - |
| [gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | 65.39 | - | - | - |
| [gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | 64.11 | - | - | - |
| [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) | 64.68 | - | - | - |
| [acge_text_embedding](https://huggingface.co/aspire/acge_text_embedding) | - | 69.07 | - | - |
| [stella-mrl-large-zh-v3.5-1792d](https://huggingface.co/infgrad/stella-mrl-large-zh-v3.5-1792d) | - | 68.55 | - | - |
| [gte-large-zh](https://huggingface.co/thenlper/gte-large-zh) | - | 66.72 | - | - |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 59.45 | 56.21 | - | - |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 61.50 | 58.81 | - | - |
| [e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) | 66.63 | 60.81 | - | - |
| [gte-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | 67.34 | 69.52 | - | - |
| [NV-Embed-v1](https://huggingface.co/nvidia/NV-Embed-v1) | 69.32 | - | - | - |
| [**gte-Qwen2-7B-instruct**](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | **70.24** | **72.05** | **68.25** | **67.86** |
| [gte-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | 67.16 | 67.65 | 66.60 | 64.04 |
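As an alternative to the provided script, a subset of MTEB can be reproduced directly with the `mteb` package; the single task below is only an example and the output folder name is arbitrary.
```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-7B-instruct", trust_remote_code=True)

# Example: evaluate one English classification task instead of the full 56-task suite.
evaluation = MTEB(tasks=["Banking77Classification"])
results = evaluation.run(model, output_folder="results/gte-Qwen2-7B-instruct")
print(results)
```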
### GTE Models
The gte series has consistently released two types of models: encoder-only models (based on the BERT architecture) and decoder-only models (based on the LLM architecture).
| Models | Language | Max Sequence Length | Dimension | Model Size (Memory Usage, fp32) |
|:-------------------------------------------------------------------------------------:|:--------:|:-----: |:---------:|:-------------------------------:|
| [GTE-large-zh](https://huggingface.co/thenlper/gte-large-zh) | Chinese | 512 | 1024 | 1.25GB |
| [GTE-base-zh](https://huggingface.co/thenlper/gte-base-zh) | Chinese | 512 | 512 | 0.41GB |
| [GTE-small-zh](https://huggingface.co/thenlper/gte-small-zh) | Chinese | 512 | 512 | 0.12GB |
| [GTE-large](https://huggingface.co/thenlper/gte-large) | English | 512 | 1024 | 1.25GB |
| [GTE-base](https://huggingface.co/thenlper/gte-base) | English | 512 | 512 | 0.21GB |
| [GTE-small](https://huggingface.co/thenlper/gte-small) | English | 512 | 384 | 0.10GB |
| [GTE-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | English | 8192 | 1024 | 1.74GB |
| [GTE-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | English | 8192 | 768 | 0.51GB |
| [GTE-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | Multilingual | 32000 | 4096 | 26.45GB |
| [GTE-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | Multilingual | 32000 | 3584 | 26.45GB |
| [GTE-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | Multilingual | 32000 | 1536 | 6.62GB |
## Cloud API Services
In addition to the open-source [GTE](https://huggingface.co/collections/Alibaba-NLP/gte-models-6680f0b13f885cb431e6d469) series models, the GTE series is also available as commercial API services on Alibaba Cloud.
- [Embedding Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-embedding/): Three versions of the text embedding models are available: text-embedding-v1/v2/v3, with v3 being the latest API service.
- [ReRank Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-sorting-model/): The gte-rerank model service is available.
Note that the models behind the commercial APIs are not entirely identical to the open-source models.
## Community support
### Fine-tuning
GTE models can be fine-tuned with the third-party framework SWIFT.
```shell
pip install ms-swift -U
```
```shell
# check: https://swift.readthedocs.io/en/latest/BestPractices/Embedding.html
nproc_per_node=8
NPROC_PER_NODE=$nproc_per_node \
USE_HF=1 \
swift sft \
--model Alibaba-NLP/gte-Qwen2-7B-instruct \
--train_type lora \
--dataset 'sentence-transformers/stsb' \
--torch_dtype bfloat16 \
--num_train_epochs 10 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps $(expr 64 / $nproc_per_node) \
--eval_steps 100 \
--save_steps 100 \
--eval_strategy steps \
--use_chat_template false \
--save_total_limit 5 \
--logging_steps 5 \
--output_dir output \
--warmup_ratio 0.05 \
--learning_rate 5e-6 \
--deepspeed zero3 \
--dataloader_num_workers 4 \
--task_type embedding \
--loss_type cosine_similarity \
--dataloader_drop_last true
```
## Citation
If you find our paper or models helpful, please consider citing:
```
@article{li2023towards,
title={Towards general text embeddings with multi-stage contrastive learning},
author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
journal={arXiv preprint arXiv:2308.03281},
year={2023}
}
```
|
["SUMMARIZATION"]
Non_BioNLP
|
twadada/nmc-cls-100_correct
|
twadada
| null |
["mteb", "model-index", "region:us"] | 1,726,213,545,000 | 2024-09-13T07:45:57 | 0 | 0 |
---
tags:
- mteb
model-index:
- name: nomic_classification_100
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: None
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 70.35820895522387
- type: ap
value: 32.749463629599404
- type: f1
value: 64.24277142151362
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: None
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 64.705075
- type: ap
value: 59.80751870729784
- type: f1
value: 64.44356439771583
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: None
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 33.642
- type: f1
value: 33.115627459191316
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: None
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 17.852
- type: map_at_10
value: 29.279
- type: map_at_100
value: 30.55
- type: map_at_1000
value: 30.605
- type: map_at_3
value: 25.296000000000003
- type: map_at_5
value: 27.498
- type: mrr_at_1
value: 18.137
- type: mrr_at_10
value: 29.398999999999997
- type: mrr_at_100
value: 30.677
- type: mrr_at_1000
value: 30.731
- type: mrr_at_3
value: 25.427
- type: mrr_at_5
value: 27.614
- type: ndcg_at_1
value: 17.852
- type: ndcg_at_10
value: 36.071999999999996
- type: ndcg_at_100
value: 42.403
- type: ndcg_at_1000
value: 43.733
- type: ndcg_at_3
value: 27.799000000000003
- type: ndcg_at_5
value: 31.805
- type: precision_at_1
value: 17.852
- type: precision_at_10
value: 5.797
- type: precision_at_100
value: 0.878
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 11.688
- type: precision_at_5
value: 8.976
- type: recall_at_1
value: 17.852
- type: recall_at_10
value: 57.965999999999994
- type: recall_at_100
value: 87.83800000000001
- type: recall_at_1000
value: 98.08
- type: recall_at_3
value: 35.064
- type: recall_at_5
value: 44.879000000000005
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: None
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 29.25407935159316
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: None
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 19.74540490543985
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: None
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 50.92680362916445
- type: mrr
value: 63.515697137580794
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: None
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 72.8794628935656
- type: cos_sim_spearman
value: 72.28899655141599
- type: euclidean_pearson
value: 72.84840274301827
- type: euclidean_spearman
value: 72.28899655141599
- type: manhattan_pearson
value: 72.27814398382203
- type: manhattan_spearman
value: 71.66970533201172
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: None
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 66.20129870129871
- type: f1
value: 65.02435616242589
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: None
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 28.56746746078776
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: None
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 19.212994376812908
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: None
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 17.7
- type: map_at_10
value: 23.182
- type: map_at_100
value: 24.2
- type: map_at_1000
value: 24.354
- type: map_at_3
value: 21.448
- type: map_at_5
value: 22.394
- type: mrr_at_1
value: 21.459
- type: mrr_at_10
value: 27.538
- type: mrr_at_100
value: 28.399
- type: mrr_at_1000
value: 28.479
- type: mrr_at_3
value: 25.775
- type: mrr_at_5
value: 26.705000000000002
- type: ndcg_at_1
value: 21.459
- type: ndcg_at_10
value: 26.987
- type: ndcg_at_100
value: 31.935999999999996
- type: ndcg_at_1000
value: 35.335
- type: ndcg_at_3
value: 24.214
- type: ndcg_at_5
value: 25.344
- type: precision_at_1
value: 21.459
- type: precision_at_10
value: 5.007000000000001
- type: precision_at_100
value: 0.9299999999999999
- type: precision_at_1000
value: 0.149
- type: precision_at_3
value: 11.445
- type: precision_at_5
value: 8.155
- type: recall_at_1
value: 17.7
- type: recall_at_10
value: 33.698
- type: recall_at_100
value: 55.933
- type: recall_at_1000
value: 79.567
- type: recall_at_3
value: 25.331
- type: recall_at_5
value: 28.681
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: None
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 13.008000000000001
- type: map_at_10
value: 17.331
- type: map_at_100
value: 18.128
- type: map_at_1000
value: 18.253
- type: map_at_3
value: 15.708
- type: map_at_5
value: 16.601
- type: mrr_at_1
value: 16.624
- type: mrr_at_10
value: 21.038999999999998
- type: mrr_at_100
value: 21.782
- type: mrr_at_1000
value: 21.869
- type: mrr_at_3
value: 19.320999999999998
- type: mrr_at_5
value: 20.266000000000002
- type: ndcg_at_1
value: 16.624
- type: ndcg_at_10
value: 20.584
- type: ndcg_at_100
value: 24.43
- type: ndcg_at_1000
value: 27.486
- type: ndcg_at_3
value: 17.724999999999998
- type: ndcg_at_5
value: 18.990000000000002
- type: precision_at_1
value: 16.624
- type: precision_at_10
value: 3.8850000000000002
- type: precision_at_100
value: 0.7250000000000001
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 8.514
- type: precision_at_5
value: 6.204
- type: recall_at_1
value: 13.008000000000001
- type: recall_at_10
value: 26.799
- type: recall_at_100
value: 43.802
- type: recall_at_1000
value: 65.035
- type: recall_at_3
value: 18.411
- type: recall_at_5
value: 21.887999999999998
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: None
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 18.459
- type: map_at_10
value: 24.775
- type: map_at_100
value: 25.691999999999997
- type: map_at_1000
value: 25.802999999999997
- type: map_at_3
value: 22.784
- type: map_at_5
value: 23.764
- type: mrr_at_1
value: 21.379
- type: mrr_at_10
value: 27.555000000000003
- type: mrr_at_100
value: 28.355000000000004
- type: mrr_at_1000
value: 28.438999999999997
- type: mrr_at_3
value: 25.663999999999998
- type: mrr_at_5
value: 26.598
- type: ndcg_at_1
value: 21.379
- type: ndcg_at_10
value: 28.691
- type: ndcg_at_100
value: 33.387
- type: ndcg_at_1000
value: 36.299
- type: ndcg_at_3
value: 24.883
- type: ndcg_at_5
value: 26.438
- type: precision_at_1
value: 21.379
- type: precision_at_10
value: 4.777
- type: precision_at_100
value: 0.7799999999999999
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 11.16
- type: precision_at_5
value: 7.7490000000000006
- type: recall_at_1
value: 18.459
- type: recall_at_10
value: 37.964999999999996
- type: recall_at_100
value: 59.728
- type: recall_at_1000
value: 81.351
- type: recall_at_3
value: 27.538
- type: recall_at_5
value: 31.464
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: None
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 8.324
- type: map_at_10
value: 10.779
- type: map_at_100
value: 11.371
- type: map_at_1000
value: 11.466999999999999
- type: map_at_3
value: 9.922
- type: map_at_5
value: 10.319
- type: mrr_at_1
value: 9.153
- type: mrr_at_10
value: 11.700000000000001
- type: mrr_at_100
value: 12.314
- type: mrr_at_1000
value: 12.406
- type: mrr_at_3
value: 10.81
- type: mrr_at_5
value: 11.234
- type: ndcg_at_1
value: 9.153
- type: ndcg_at_10
value: 12.472
- type: ndcg_at_100
value: 15.942
- type: ndcg_at_1000
value: 19.118
- type: ndcg_at_3
value: 10.644
- type: ndcg_at_5
value: 11.355
- type: precision_at_1
value: 9.153
- type: precision_at_10
value: 1.921
- type: precision_at_100
value: 0.391
- type: precision_at_1000
value: 0.07100000000000001
- type: precision_at_3
value: 4.444
- type: precision_at_5
value: 3.073
- type: recall_at_1
value: 8.324
- type: recall_at_10
value: 16.971
- type: recall_at_100
value: 34.041
- type: recall_at_1000
value: 59.45399999999999
- type: recall_at_3
value: 11.77
- type: recall_at_5
value: 13.522
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: None
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 3.998
- type: map_at_10
value: 6.22
- type: map_at_100
value: 6.687
- type: map_at_1000
value: 6.796
- type: map_at_3
value: 5.124
- type: map_at_5
value: 5.705
- type: mrr_at_1
value: 5.224
- type: mrr_at_10
value: 7.915
- type: mrr_at_100
value: 8.433
- type: mrr_at_1000
value: 8.530999999999999
- type: mrr_at_3
value: 6.654
- type: mrr_at_5
value: 7.276000000000001
- type: ndcg_at_1
value: 5.224
- type: ndcg_at_10
value: 8.238
- type: ndcg_at_100
value: 11.126999999999999
- type: ndcg_at_1000
value: 14.552999999999999
- type: ndcg_at_3
value: 6.0249999999999995
- type: ndcg_at_5
value: 6.981999999999999
- type: precision_at_1
value: 5.224
- type: precision_at_10
value: 1.7160000000000002
- type: precision_at_100
value: 0.371
- type: precision_at_1000
value: 0.078
- type: precision_at_3
value: 2.9850000000000003
- type: precision_at_5
value: 2.413
- type: recall_at_1
value: 3.998
- type: recall_at_10
value: 12.995999999999999
- type: recall_at_100
value: 26.819
- type: recall_at_1000
value: 52.608
- type: recall_at_3
value: 6.721000000000001
- type: recall_at_5
value: 9.198
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: None
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 12.331
- type: map_at_10
value: 16.913
- type: map_at_100
value: 17.841
- type: map_at_1000
value: 17.977
- type: map_at_3
value: 15.633
- type: map_at_5
value: 16.256
- type: mrr_at_1
value: 15.110999999999999
- type: mrr_at_10
value: 20.419999999999998
- type: mrr_at_100
value: 21.294
- type: mrr_at_1000
value: 21.386
- type: mrr_at_3
value: 18.961
- type: mrr_at_5
value: 19.682
- type: ndcg_at_1
value: 15.110999999999999
- type: ndcg_at_10
value: 20.115
- type: ndcg_at_100
value: 24.914
- type: ndcg_at_1000
value: 28.375
- type: ndcg_at_3
value: 17.732
- type: ndcg_at_5
value: 18.658
- type: precision_at_1
value: 15.110999999999999
- type: precision_at_10
value: 3.696
- type: precision_at_100
value: 0.762
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 8.566
- type: precision_at_5
value: 5.9670000000000005
- type: recall_at_1
value: 12.331
- type: recall_at_10
value: 26.429000000000002
- type: recall_at_100
value: 47.341
- type: recall_at_1000
value: 72.149
- type: recall_at_3
value: 19.467000000000002
- type: recall_at_5
value: 21.981
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: None
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 8.262
- type: map_at_10
value: 11.962
- type: map_at_100
value: 12.729
- type: map_at_1000
value: 12.86
- type: map_at_3
value: 10.65
- type: map_at_5
value: 11.388
- type: mrr_at_1
value: 10.502
- type: mrr_at_10
value: 14.715
- type: mrr_at_100
value: 15.484
- type: mrr_at_1000
value: 15.581999999999999
- type: mrr_at_3
value: 13.299
- type: mrr_at_5
value: 14.097999999999999
- type: ndcg_at_1
value: 10.502
- type: ndcg_at_10
value: 14.649000000000001
- type: ndcg_at_100
value: 18.738
- type: ndcg_at_1000
value: 22.456
- type: ndcg_at_3
value: 12.222
- type: ndcg_at_5
value: 13.314
- type: precision_at_1
value: 10.502
- type: precision_at_10
value: 2.82
- type: precision_at_100
value: 0.588
- type: precision_at_1000
value: 0.108
- type: precision_at_3
value: 5.936
- type: precision_at_5
value: 4.452
- type: recall_at_1
value: 8.262
- type: recall_at_10
value: 20.168
- type: recall_at_100
value: 38.405
- type: recall_at_1000
value: 65.694
- type: recall_at_3
value: 13.428999999999998
- type: recall_at_5
value: 16.229
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 10.117416666666665
- type: map_at_10
value: 13.858333333333334
- type: map_at_100
value: 14.565166666666668
- type: map_at_1000
value: 14.68266666666667
- type: map_at_3
value: 12.60983333333333
- type: map_at_5
value: 13.277416666666667
- type: mrr_at_1
value: 12.332833333333335
- type: mrr_at_10
value: 16.376333333333335
- type: mrr_at_100
value: 17.063333333333333
- type: mrr_at_1000
value: 17.1535
- type: mrr_at_3
value: 15.040666666666667
- type: mrr_at_5
value: 15.764833333333334
- type: ndcg_at_1
value: 12.332833333333335
- type: ndcg_at_10
value: 16.51366666666667
- type: ndcg_at_100
value: 20.2845
- type: ndcg_at_1000
value: 23.54025
- type: ndcg_at_3
value: 14.171250000000002
- type: ndcg_at_5
value: 15.193583333333333
- type: precision_at_1
value: 12.332833333333335
- type: precision_at_10
value: 2.983083333333333
- type: precision_at_100
value: 0.58325
- type: precision_at_1000
value: 0.10250000000000001
- type: precision_at_3
value: 6.626083333333334
- type: precision_at_5
value: 4.774916666666665
- type: recall_at_1
value: 10.117416666666665
- type: recall_at_10
value: 22.14666666666667
- type: recall_at_100
value: 39.5745
- type: recall_at_1000
value: 63.73550000000001
- type: recall_at_3
value: 15.431666666666665
- type: recall_at_5
value: 18.1215
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: None
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 7.431
- type: map_at_10
value: 10.172
- type: map_at_100
value: 10.639999999999999
- type: map_at_1000
value: 10.716000000000001
- type: map_at_3
value: 9.242
- type: map_at_5
value: 9.614
- type: mrr_at_1
value: 9.202
- type: mrr_at_10
value: 12.08
- type: mrr_at_100
value: 12.58
- type: mrr_at_1000
value: 12.649
- type: mrr_at_3
value: 11.145
- type: mrr_at_5
value: 11.59
- type: ndcg_at_1
value: 9.202
- type: ndcg_at_10
value: 12.291
- type: ndcg_at_100
value: 14.940999999999999
- type: ndcg_at_1000
value: 17.325
- type: ndcg_at_3
value: 10.446
- type: ndcg_at_5
value: 11.027000000000001
- type: precision_at_1
value: 9.202
- type: precision_at_10
value: 2.193
- type: precision_at_100
value: 0.388
- type: precision_at_1000
value: 0.065
- type: precision_at_3
value: 4.806
- type: precision_at_5
value: 3.374
- type: recall_at_1
value: 7.431
- type: recall_at_10
value: 17.197000000000003
- type: recall_at_100
value: 29.704000000000004
- type: recall_at_1000
value: 48.278999999999996
- type: recall_at_3
value: 11.616999999999999
- type: recall_at_5
value: 13.181000000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: None
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 5.348
- type: map_at_10
value: 7.591
- type: map_at_100
value: 8.109
- type: map_at_1000
value: 8.206
- type: map_at_3
value: 6.782000000000001
- type: map_at_5
value: 7.244000000000001
- type: mrr_at_1
value: 6.641
- type: mrr_at_10
value: 9.281
- type: mrr_at_100
value: 9.838
- type: mrr_at_1000
value: 9.922
- type: mrr_at_3
value: 8.286999999999999
- type: mrr_at_5
value: 8.866999999999999
- type: ndcg_at_1
value: 6.641
- type: ndcg_at_10
value: 9.302000000000001
- type: ndcg_at_100
value: 12.200999999999999
- type: ndcg_at_1000
value: 15.223999999999998
- type: ndcg_at_3
value: 7.692
- type: ndcg_at_5
value: 8.474
- type: precision_at_1
value: 6.641
- type: precision_at_10
value: 1.755
- type: precision_at_100
value: 0.388
- type: precision_at_1000
value: 0.079
- type: precision_at_3
value: 3.6249999999999996
- type: precision_at_5
value: 2.753
- type: recall_at_1
value: 5.348
- type: recall_at_10
value: 12.887
- type: recall_at_100
value: 26.391
- type: recall_at_1000
value: 49.156
- type: recall_at_3
value: 8.519
- type: recall_at_5
value: 10.431
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: None
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 7.9750000000000005
- type: map_at_10
value: 11.28
- type: map_at_100
value: 11.953
- type: map_at_1000
value: 12.051
- type: map_at_3
value: 10.022
- type: map_at_5
value: 10.807
- type: mrr_at_1
value: 9.795
- type: mrr_at_10
value: 13.544999999999998
- type: mrr_at_100
value: 14.249999999999998
- type: mrr_at_1000
value: 14.341000000000001
- type: mrr_at_3
value: 12.174
- type: mrr_at_5
value: 13.041
- type: ndcg_at_1
value: 9.795
- type: ndcg_at_10
value: 13.697000000000001
- type: ndcg_at_100
value: 17.389
- type: ndcg_at_1000
value: 20.46
- type: ndcg_at_3
value: 11.277
- type: ndcg_at_5
value: 12.579
- type: precision_at_1
value: 9.795
- type: precision_at_10
value: 2.435
- type: precision_at_100
value: 0.481
- type: precision_at_1000
value: 0.084
- type: precision_at_3
value: 5.255
- type: precision_at_5
value: 3.955
- type: recall_at_1
value: 7.9750000000000005
- type: recall_at_10
value: 18.981
- type: recall_at_100
value: 36.178
- type: recall_at_1000
value: 59.46900000000001
- type: recall_at_3
value: 12.371
- type: recall_at_5
value: 15.613
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: None
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 10.742
- type: map_at_10
value: 15.346000000000002
- type: map_at_100
value: 16.153000000000002
- type: map_at_1000
value: 16.311999999999998
- type: map_at_3
value: 14.222999999999999
- type: map_at_5
value: 14.777000000000001
- type: mrr_at_1
value: 14.032
- type: mrr_at_10
value: 18.83
- type: mrr_at_100
value: 19.564999999999998
- type: mrr_at_1000
value: 19.655
- type: mrr_at_3
value: 17.523
- type: mrr_at_5
value: 18.244
- type: ndcg_at_1
value: 14.032
- type: ndcg_at_10
value: 18.496000000000002
- type: ndcg_at_100
value: 22.377
- type: ndcg_at_1000
value: 26.284000000000002
- type: ndcg_at_3
value: 16.520000000000003
- type: ndcg_at_5
value: 17.276
- type: precision_at_1
value: 14.032
- type: precision_at_10
value: 3.5770000000000004
- type: precision_at_100
value: 0.783
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 7.971
- type: precision_at_5
value: 5.692
- type: recall_at_1
value: 10.742
- type: recall_at_10
value: 24.157999999999998
- type: recall_at_100
value: 42.091
- type: recall_at_1000
value: 70.054
- type: recall_at_3
value: 17.916999999999998
- type: recall_at_5
value: 20.131
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: None
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 7.831
- type: map_at_10
value: 10.749
- type: map_at_100
value: 11.279
- type: map_at_1000
value: 11.397
- type: map_at_3
value: 9.78
- type: map_at_5
value: 10.459999999999999
- type: mrr_at_1
value: 8.872
- type: mrr_at_10
value: 11.898
- type: mrr_at_100
value: 12.466000000000001
- type: mrr_at_1000
value: 12.583
- type: mrr_at_3
value: 10.875
- type: mrr_at_5
value: 11.577
- type: ndcg_at_1
value: 8.872
- type: ndcg_at_10
value: 12.642000000000001
- type: ndcg_at_100
value: 16.032
- type: ndcg_at_1000
value: 19.567999999999998
- type: ndcg_at_3
value: 10.674999999999999
- type: ndcg_at_5
value: 11.886
- type: precision_at_1
value: 8.872
- type: precision_at_10
value: 2.015
- type: precision_at_100
value: 0.41200000000000003
- type: precision_at_1000
value: 0.077
- type: precision_at_3
value: 4.806
- type: precision_at_5
value: 3.512
- type: recall_at_1
value: 7.831
- type: recall_at_10
value: 17.511
- type: recall_at_100
value: 34.461000000000006
- type: recall_at_1000
value: 62.01
- type: recall_at_3
value: 12.089
- type: recall_at_5
value: 15.139
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: None
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 3.3300000000000005
- type: map_at_10
value: 5.8709999999999996
- type: map_at_100
value: 6.7860000000000005
- type: map_at_1000
value: 6.955
- type: map_at_3
value: 4.714
- type: map_at_5
value: 5.26
- type: mrr_at_1
value: 7.101
- type: mrr_at_10
value: 12.125
- type: mrr_at_100
value: 13.200000000000001
- type: mrr_at_1000
value: 13.295000000000002
- type: mrr_at_3
value: 10.119
- type: mrr_at_5
value: 11.038
- type: ndcg_at_1
value: 7.101
- type: ndcg_at_10
value: 9.159
- type: ndcg_at_100
value: 14.030000000000001
- type: ndcg_at_1000
value: 18.013
- type: ndcg_at_3
value: 6.6739999999999995
- type: ndcg_at_5
value: 7.4719999999999995
- type: precision_at_1
value: 7.101
- type: precision_at_10
value: 3.16
- type: precision_at_100
value: 0.84
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 5.081
- type: precision_at_5
value: 4.143
- type: recall_at_1
value: 3.3300000000000005
- type: recall_at_10
value: 12.215
- type: recall_at_100
value: 29.683999999999997
- type: recall_at_1000
value: 52.951
- type: recall_at_3
value: 6.356000000000001
- type: recall_at_5
value: 8.315
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: None
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 1.718
- type: map_at_10
value: 3.639
- type: map_at_100
value: 4.853
- type: map_at_1000
value: 5.219
- type: map_at_3
value: 2.6149999999999998
- type: map_at_5
value: 3.073
- type: mrr_at_1
value: 20.0
- type: mrr_at_10
value: 26.88
- type: mrr_at_100
value: 27.753
- type: mrr_at_1000
value: 27.822000000000003
- type: mrr_at_3
value: 24.667
- type: mrr_at_5
value: 25.654
- type: ndcg_at_1
value: 15.0
- type: ndcg_at_10
value: 10.878
- type: ndcg_at_100
value: 12.011
- type: ndcg_at_1000
value: 16.492
- type: ndcg_at_3
value: 12.818999999999999
- type: ndcg_at_5
value: 11.554
- type: precision_at_1
value: 20.0
- type: precision_at_10
value: 9.625
- type: precision_at_100
value: 3.037
- type: precision_at_1000
value: 0.7080000000000001
- type: precision_at_3
value: 15.082999999999998
- type: precision_at_5
value: 12.1
- type: recall_at_1
value: 1.718
- type: recall_at_10
value: 5.716
- type: recall_at_100
value: 14.266000000000002
- type: recall_at_1000
value: 30.012
- type: recall_at_3
value: 3.108
- type: recall_at_5
value: 4.181
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: None
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 41.114999999999995
- type: f1
value: 37.00141090816854
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: None
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 5.523
- type: map_at_10
value: 8.036
- type: map_at_100
value: 8.581999999999999
- type: map_at_1000
value: 8.657
- type: map_at_3
value: 7.13
- type: map_at_5
value: 7.536
- type: mrr_at_1
value: 5.836
- type: mrr_at_10
value: 8.547
- type: mrr_at_100
value: 9.123000000000001
- type: mrr_at_1000
value: 9.197
- type: mrr_at_3
value: 7.563000000000001
- type: mrr_at_5
value: 8.006
- type: ndcg_at_1
value: 5.836
- type: ndcg_at_10
value: 9.764000000000001
- type: ndcg_at_100
value: 12.866
- type: ndcg_at_1000
value: 15.243
- type: ndcg_at_3
value: 7.7700000000000005
- type: ndcg_at_5
value: 8.518
- type: precision_at_1
value: 5.836
- type: precision_at_10
value: 1.6070000000000002
- type: precision_at_100
value: 0.331
- type: precision_at_1000
value: 0.055
- type: precision_at_3
value: 3.2849999999999997
- type: precision_at_5
value: 2.37
- type: recall_at_1
value: 5.523
- type: recall_at_10
value: 14.795
- type: recall_at_100
value: 29.932
- type: recall_at_1000
value: 48.946
- type: recall_at_3
value: 9.208
- type: recall_at_5
value: 10.984
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: None
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 4.135
- type: map_at_10
value: 6.433999999999999
- type: map_at_100
value: 7.196
- type: map_at_1000
value: 7.356999999999999
- type: map_at_3
value: 5.339
- type: map_at_5
value: 5.878
- type: mrr_at_1
value: 8.796
- type: mrr_at_10
value: 12.357999999999999
- type: mrr_at_100
value: 13.208
- type: mrr_at_1000
value: 13.318
- type: mrr_at_3
value: 10.777000000000001
- type: mrr_at_5
value: 11.525
- type: ndcg_at_1
value: 8.796
- type: ndcg_at_10
value: 9.332
- type: ndcg_at_100
value: 13.517999999999999
- type: ndcg_at_1000
value: 17.907999999999998
- type: ndcg_at_3
value: 7.481999999999999
- type: ndcg_at_5
value: 8.065
- type: precision_at_1
value: 8.796
- type: precision_at_10
value: 2.8240000000000003
- type: precision_at_100
value: 0.705
- type: precision_at_1000
value: 0.14400000000000002
- type: precision_at_3
value: 4.887
- type: precision_at_5
value: 3.8580000000000005
- type: recall_at_1
value: 4.135
- type: recall_at_10
value: 12.292
- type: recall_at_100
value: 28.915999999999997
- type: recall_at_1000
value: 57.477999999999994
- type: recall_at_3
value: 6.747
- type: recall_at_5
value: 8.667
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: None
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 5.928
- type: map_at_10
value: 8.469
- type: map_at_100
value: 8.936
- type: map_at_1000
value: 9.02
- type: map_at_3
value: 7.582
- type: map_at_5
value: 8.021
- type: mrr_at_1
value: 11.857
- type: mrr_at_10
value: 15.675
- type: mrr_at_100
value: 16.273
- type: mrr_at_1000
value: 16.356
- type: mrr_at_3
value: 14.347999999999999
- type: mrr_at_5
value: 14.995
- type: ndcg_at_1
value: 11.857
- type: ndcg_at_10
value: 11.651
- type: ndcg_at_100
value: 14.374999999999998
- type: ndcg_at_1000
value: 16.912
- type: ndcg_at_3
value: 9.625
- type: ndcg_at_5
value: 10.474
- type: precision_at_1
value: 11.857
- type: precision_at_10
value: 2.777
- type: precision_at_100
value: 0.503
- type: precision_at_1000
value: 0.08499999999999999
- type: precision_at_3
value: 6.140000000000001
- type: precision_at_5
value: 4.362
- type: recall_at_1
value: 5.928
- type: recall_at_10
value: 13.883000000000001
- type: recall_at_100
value: 25.137999999999998
- type: recall_at_1000
value: 42.315999999999995
- type: recall_at_3
value: 9.21
- type: recall_at_5
value: 10.905
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: None
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 65.4388
- type: ap
value: 60.440774024423426
- type: f1
value: 65.31315753102281
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: None
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 3.4479999999999995
- type: map_at_10
value: 5.74
- type: map_at_100
value: 6.2780000000000005
- type: map_at_1000
value: 6.358999999999999
- type: map_at_3
value: 4.82
- type: map_at_5
value: 5.3
- type: mrr_at_1
value: 3.5389999999999997
- type: mrr_at_10
value: 5.906000000000001
- type: mrr_at_100
value: 6.455
- type: mrr_at_1000
value: 6.5360000000000005
- type: mrr_at_3
value: 4.9639999999999995
- type: mrr_at_5
value: 5.453
- type: ndcg_at_1
value: 3.5389999999999997
- type: ndcg_at_10
value: 7.255000000000001
- type: ndcg_at_100
value: 10.308
- type: ndcg_at_1000
value: 12.93
- type: ndcg_at_3
value: 5.314
- type: ndcg_at_5
value: 6.184
- type: precision_at_1
value: 3.5389999999999997
- type: precision_at_10
value: 1.246
- type: precision_at_100
value: 0.28500000000000003
- type: precision_at_1000
value: 0.051000000000000004
- type: precision_at_3
value: 2.297
- type: precision_at_5
value: 1.814
- type: recall_at_1
value: 3.4479999999999995
- type: recall_at_10
value: 11.982
- type: recall_at_100
value: 27.123
- type: recall_at_1000
value: 48.489
- type: recall_at_3
value: 6.607
- type: recall_at_5
value: 8.706
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: None
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 85.9484724122207
- type: f1
value: 85.39768490584245
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: None
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 58.48837209302326
- type: f1
value: 39.10849416181491
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: None
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.632145258910555
- type: f1
value: 58.09773014884143
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: None
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.68325487558843
- type: f1
value: 65.91204845805859
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: None
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 26.41069242141184
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: None
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 23.307848920918044
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: None
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 28.270878365120332
- type: mrr
value: 29.057926505909254
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: None
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 1.855
- type: map_at_10
value: 3.582
- type: map_at_100
value: 4.694
- type: map_at_1000
value: 5.739
- type: map_at_3
value: 2.677
- type: map_at_5
value: 3.1
- type: mrr_at_1
value: 18.884999999999998
- type: mrr_at_10
value: 27.256999999999998
- type: mrr_at_100
value: 28.327999999999996
- type: mrr_at_1000
value: 28.402
- type: mrr_at_3
value: 24.2
- type: mrr_at_5
value: 26.011
- type: ndcg_at_1
value: 17.957
- type: ndcg_at_10
value: 14.051
- type: ndcg_at_100
value: 14.282
- type: ndcg_at_1000
value: 24.3
- type: ndcg_at_3
value: 15.478
- type: ndcg_at_5
value: 14.782
- type: precision_at_1
value: 18.884999999999998
- type: precision_at_10
value: 10.743
- type: precision_at_100
value: 4.449
- type: precision_at_1000
value: 1.7670000000000001
- type: precision_at_3
value: 14.654
- type: precision_at_5
value: 12.940999999999999
- type: recall_at_1
value: 1.855
- type: recall_at_10
value: 6.861000000000001
- type: recall_at_100
value: 18.044
- type: recall_at_1000
value: 52.712
- type: recall_at_3
value: 3.3369999999999997
- type: recall_at_5
value: 4.562
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: None
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 4.881
- type: map_at_10
value: 8.241999999999999
- type: map_at_100
value: 8.956999999999999
- type: map_at_1000
value: 9.062000000000001
- type: map_at_3
value: 6.981
- type: map_at_5
value: 7.61
- type: mrr_at_1
value: 5.5329999999999995
- type: mrr_at_10
value: 9.184000000000001
- type: mrr_at_100
value: 9.918000000000001
- type: mrr_at_1000
value: 10.018
- type: mrr_at_3
value: 7.836
- type: mrr_at_5
value: 8.518
- type: ndcg_at_1
value: 5.5329999999999995
- type: ndcg_at_10
value: 10.554
- type: ndcg_at_100
value: 14.341999999999999
- type: ndcg_at_1000
value: 17.458000000000002
- type: ndcg_at_3
value: 7.8759999999999994
- type: ndcg_at_5
value: 9.023
- type: precision_at_1
value: 5.5329999999999995
- type: precision_at_10
value: 1.944
- type: precision_at_100
value: 0.411
- type: precision_at_1000
value: 0.07100000000000001
- type: precision_at_3
value: 3.669
- type: precision_at_5
value: 2.8160000000000003
- type: recall_at_1
value: 4.881
- type: recall_at_10
value: 16.898
- type: recall_at_100
value: 34.625
- type: recall_at_1000
value: 58.901
- type: recall_at_3
value: 9.651
- type: recall_at_5
value: 12.35
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: None
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 53.159
- type: map_at_10
value: 64.053
- type: map_at_100
value: 64.938
- type: map_at_1000
value: 64.994
- type: map_at_3
value: 61.413
- type: map_at_5
value: 62.966
- type: mrr_at_1
value: 61.129999999999995
- type: mrr_at_10
value: 68.84400000000001
- type: mrr_at_100
value: 69.3
- type: mrr_at_1000
value: 69.319
- type: mrr_at_3
value: 67.113
- type: mrr_at_5
value: 68.162
- type: ndcg_at_1
value: 61.160000000000004
- type: ndcg_at_10
value: 68.944
- type: ndcg_at_100
value: 72.10499999999999
- type: ndcg_at_1000
value: 73.046
- type: ndcg_at_3
value: 65.223
- type: ndcg_at_5
value: 67.05
- type: precision_at_1
value: 61.160000000000004
- type: precision_at_10
value: 10.392999999999999
- type: precision_at_100
value: 1.327
- type: precision_at_1000
value: 0.149
- type: precision_at_3
value: 28.13
- type: precision_at_5
value: 18.656
- type: recall_at_1
value: 53.159
- type: recall_at_10
value: 78.412
- type: recall_at_100
value: 91.399
- type: recall_at_1000
value: 97.52
- type: recall_at_3
value: 67.794
- type: recall_at_5
value: 72.801
- type: map_at_1
value: 1.8450000000000002
- type: map_at_10
value: 4.172
- type: map_at_100
value: 5.092
- type: map_at_1000
value: 5.3100000000000005
- type: map_at_3
value: 3.093
- type: map_at_5
value: 3.6450000000000005
- type: mrr_at_1
value: 9.1
- type: mrr_at_10
value: 15.15
- type: mrr_at_100
value: 16.216
- type: mrr_at_1000
value: 16.332
- type: mrr_at_3
value: 12.55
- type: mrr_at_5
value: 13.975000000000001
- type: ndcg_at_1
value: 9.1
- type: ndcg_at_10
value: 8.065999999999999
- type: ndcg_at_100
value: 12.982
- type: ndcg_at_1000
value: 18.046
- type: ndcg_at_3
value: 7.295999999999999
- type: ndcg_at_5
value: 6.572
- type: precision_at_1
value: 9.1
- type: precision_at_10
value: 4.29
- type: precision_at_100
value: 1.16
- type: precision_at_1000
value: 0.23900000000000002
- type: precision_at_3
value: 6.833
- type: precision_at_5
value: 5.88
- type: recall_at_1
value: 1.8450000000000002
- type: recall_at_10
value: 8.706999999999999
- type: recall_at_100
value: 23.645
- type: recall_at_1000
value: 48.597
- type: recall_at_3
value: 4.175
- type: recall_at_5
value: 5.973
- type: map_at_1
value: 0.058
- type: map_at_10
value: 0.445
- type: map_at_100
value: 2.489
- type: map_at_1000
value: 6.3100000000000005
- type: map_at_3
value: 0.16999999999999998
- type: map_at_5
value: 0.254
- type: mrr_at_1
value: 32.0
- type: mrr_at_10
value: 46.016
- type: mrr_at_100
value: 46.683
- type: mrr_at_1000
value: 46.719
- type: mrr_at_3
value: 41.667
- type: mrr_at_5
value: 42.967
- type: ndcg_at_1
value: 26.0
- type: ndcg_at_10
value: 29.885
- type: ndcg_at_100
value: 22.958000000000002
- type: ndcg_at_1000
value: 22.244
- type: ndcg_at_3
value: 29.787999999999997
- type: ndcg_at_5
value: 29.494999999999997
- type: precision_at_1
value: 32.0
- type: precision_at_10
value: 33.800000000000004
- type: precision_at_100
value: 24.52
- type: precision_at_1000
value: 11.196
- type: precision_at_3
value: 35.333
- type: precision_at_5
value: 34.0
- type: recall_at_1
value: 0.058
- type: recall_at_10
value: 0.657
- type: recall_at_100
value: 5.069
- type: recall_at_1000
value: 22.447
- type: recall_at_3
value: 0.2
- type: recall_at_5
value: 0.32299999999999995
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: None
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 30.140589231842256
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: None
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 39.92770613505385
- task:
type: STS
dataset:
name: MTEB SICK-R
type: None
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 75.59024815989618
- type: cos_sim_spearman
value: 68.11624653233133
- type: euclidean_pearson
value: 73.27920094980502
- type: euclidean_spearman
value: 68.11632959681863
- type: manhattan_pearson
value: 72.54935141266294
- type: manhattan_spearman
value: 67.12457070604133
- task:
type: STS
dataset:
name: MTEB STS12
type: None
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 69.40126270570799
- type: cos_sim_spearman
value: 62.14207404840335
- type: euclidean_pearson
value: 66.27602017682412
- type: euclidean_spearman
value: 62.143384728461314
- type: manhattan_pearson
value: 67.07706053549664
- type: manhattan_spearman
value: 63.06497657163255
- task:
type: STS
dataset:
name: MTEB STS13
type: None
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 75.5989515866992
- type: cos_sim_spearman
value: 77.15211512453997
- type: euclidean_pearson
value: 76.70296919445704
- type: euclidean_spearman
value: 77.15215294384531
- type: manhattan_pearson
value: 77.00183340244841
- type: manhattan_spearman
value: 77.54347126493187
- task:
type: STS
dataset:
name: MTEB STS14
type: None
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 73.76592708615566
- type: cos_sim_spearman
value: 70.57102535486983
- type: euclidean_pearson
value: 73.16493844323281
- type: euclidean_spearman
value: 70.57101566858893
- type: manhattan_pearson
value: 73.3644832097739
- type: manhattan_spearman
value: 70.93527541966915
- task:
type: STS
dataset:
name: MTEB STS15
type: None
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 75.95076880553377
- type: cos_sim_spearman
value: 77.68458699868269
- type: euclidean_pearson
value: 77.7470713475935
- type: euclidean_spearman
value: 77.6845933113232
- type: manhattan_pearson
value: 78.19369618957612
- type: manhattan_spearman
value: 78.11088657087784
- task:
type: STS
dataset:
name: MTEB STS16
type: None
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 71.9715763028299
- type: cos_sim_spearman
value: 73.53220647955904
- type: euclidean_pearson
value: 73.57406594330985
- type: euclidean_spearman
value: 73.53303581777323
- type: manhattan_pearson
value: 74.03967460920595
- type: manhattan_spearman
value: 74.05778553630698
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: None
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 78.73667148725723
- type: cos_sim_spearman
value: 80.81028828869353
- type: euclidean_pearson
value: 81.15810431179573
- type: euclidean_spearman
value: 80.81116429309112
- type: manhattan_pearson
value: 81.55719120035107
- type: manhattan_spearman
value: 81.20882260152872
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: None
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 61.43534524580482
- type: cos_sim_spearman
value: 59.839157733781434
- type: euclidean_pearson
value: 61.83093863698779
- type: euclidean_spearman
value: 59.839157733781434
- type: manhattan_pearson
value: 62.55988010471628
- type: manhattan_spearman
value: 60.30306061143011
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: None
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 72.25188934839379
- type: cos_sim_spearman
value: 70.9113050369473
- type: euclidean_pearson
value: 72.68710352046212
- type: euclidean_spearman
value: 70.9113534378691
- type: manhattan_pearson
value: 73.09745859415004
- type: manhattan_spearman
value: 71.26505067192102
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: None
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 67.5036392977626
- type: mrr
value: 87.43891003694925
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: None
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 20.889
- type: map_at_10
value: 27.165
- type: map_at_100
value: 28.368
- type: map_at_1000
value: 28.483999999999998
- type: map_at_3
value: 25.180999999999997
- type: map_at_5
value: 26.269
- type: mrr_at_1
value: 22.0
- type: mrr_at_10
value: 28.512999999999998
- type: mrr_at_100
value: 29.531000000000002
- type: mrr_at_1000
value: 29.635
- type: mrr_at_3
value: 26.611
- type: mrr_at_5
value: 27.594
- type: ndcg_at_1
value: 22.0
- type: ndcg_at_10
value: 30.814000000000004
- type: ndcg_at_100
value: 36.647999999999996
- type: ndcg_at_1000
value: 39.81
- type: ndcg_at_3
value: 26.845999999999997
- type: ndcg_at_5
value: 28.677999999999997
- type: precision_at_1
value: 22.0
- type: precision_at_10
value: 4.5
- type: precision_at_100
value: 0.773
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 10.778
- type: precision_at_5
value: 7.5329999999999995
- type: recall_at_1
value: 20.889
- type: recall_at_10
value: 40.861
- type: recall_at_100
value: 68.089
- type: recall_at_1000
value: 93.05
- type: recall_at_3
value: 30.083
- type: recall_at_5
value: 34.556
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: None
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.47524752475248
- type: cos_sim_ap
value: 75.756486791625
- type: cos_sim_f1
value: 70.0162074554295
- type: cos_sim_precision
value: 76.14571092831962
- type: cos_sim_recall
value: 64.8
- type: dot_accuracy
value: 99.47524752475248
- type: dot_ap
value: 75.756486791625
- type: dot_f1
value: 70.0162074554295
- type: dot_precision
value: 76.14571092831962
- type: dot_recall
value: 64.8
- type: euclidean_accuracy
value: 99.47524752475248
- type: euclidean_ap
value: 75.756486791625
- type: euclidean_f1
value: 70.0162074554295
- type: euclidean_precision
value: 76.14571092831962
- type: euclidean_recall
value: 64.8
- type: manhattan_accuracy
value: 99.53069306930693
- type: manhattan_ap
value: 78.93311079752957
- type: manhattan_f1
value: 72.61292166952545
- type: manhattan_precision
value: 84.77970627503338
- type: manhattan_recall
value: 63.5
- type: max_accuracy
value: 99.53069306930693
- type: max_ap
value: 78.93311079752957
- type: max_f1
value: 72.61292166952545
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: None
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 38.956591584917824
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: None
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 28.829387041051085
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: None
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 41.618168302388256
- type: mrr
value: 42.031210211357276
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: None
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.716182681356333
- type: cos_sim_spearman
value: 28.852160879670087
- type: dot_pearson
value: 29.716182648715844
- type: dot_spearman
value: 28.951026187665967
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: None
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.157
- type: map_at_10
value: 6.787999999999999
- type: map_at_100
value: 9.948
- type: map_at_1000
value: 11.331
- type: map_at_3
value: 4.642
- type: map_at_5
value: 5.718999999999999
- type: mrr_at_1
value: 28.571
- type: mrr_at_10
value: 39.195
- type: mrr_at_100
value: 40.778999999999996
- type: mrr_at_1000
value: 40.797
- type: mrr_at_3
value: 36.394999999999996
- type: mrr_at_5
value: 38.129000000000005
- type: ndcg_at_1
value: 28.571
- type: ndcg_at_10
value: 17.936
- type: ndcg_at_100
value: 26.552999999999997
- type: ndcg_at_1000
value: 38.318000000000005
- type: ndcg_at_3
value: 24.192
- type: ndcg_at_5
value: 21.732000000000003
- type: precision_at_1
value: 28.571
- type: precision_at_10
value: 14.285999999999998
- type: precision_at_100
value: 5.489999999999999
- type: precision_at_1000
value: 1.2710000000000001
- type: precision_at_3
value: 24.490000000000002
- type: precision_at_5
value: 20.816000000000003
- type: recall_at_1
value: 2.157
- type: recall_at_10
value: 9.729000000000001
- type: recall_at_100
value: 32.688
- type: recall_at_1000
value: 69.123
- type: recall_at_3
value: 5.26
- type: recall_at_5
value: 7.109
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: None
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 67.9134
- type: ap
value: 12.774220384041032
- type: f1
value: 52.153059662642434
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: None
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 53.613469156762875
- type: f1
value: 53.786522868566145
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: None
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 30.747359446594245
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: None
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.97806520832091
- type: cos_sim_ap
value: 66.35427447671117
- type: cos_sim_f1
value: 63.0426851514046
- type: cos_sim_precision
value: 58.47056169636815
- type: cos_sim_recall
value: 68.3905013192612
- type: dot_accuracy
value: 83.97806520832091
- type: dot_ap
value: 66.35427447671117
- type: dot_f1
value: 63.0426851514046
- type: dot_precision
value: 58.47056169636815
- type: dot_recall
value: 68.3905013192612
- type: euclidean_accuracy
value: 83.97806520832091
- type: euclidean_ap
value: 66.35427447671117
- type: euclidean_f1
value: 63.0426851514046
- type: euclidean_precision
value: 58.47056169636815
- type: euclidean_recall
value: 68.3905013192612
- type: manhattan_accuracy
value: 83.97210466710378
- type: manhattan_ap
value: 65.97618382203181
- type: manhattan_f1
value: 62.53991648243675
- type: manhattan_precision
value: 58.501838235294116
- type: manhattan_recall
value: 67.17678100263852
- type: max_accuracy
value: 83.97806520832091
- type: max_ap
value: 66.35427447671117
- type: max_f1
value: 63.0426851514046
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: None
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 86.71362595567975
- type: cos_sim_ap
value: 80.86796720185393
- type: cos_sim_f1
value: 73.24097703244622
- type: cos_sim_precision
value: 69.5540783824955
- type: cos_sim_recall
value: 77.34062211271944
- type: dot_accuracy
value: 86.71362595567975
- type: dot_ap
value: 80.86797238493406
- type: dot_f1
value: 73.24097703244622
- type: dot_precision
value: 69.5540783824955
- type: dot_recall
value: 77.34062211271944
- type: euclidean_accuracy
value: 86.71362595567975
- type: euclidean_ap
value: 80.86796690301992
- type: euclidean_f1
value: 73.24097703244622
- type: euclidean_precision
value: 69.5540783824955
- type: euclidean_recall
value: 77.34062211271944
- type: manhattan_accuracy
value: 86.64376916210657
- type: manhattan_ap
value: 80.8520473693602
- type: manhattan_f1
value: 73.15887850467291
- type: manhattan_precision
value: 71.10158407208255
- type: manhattan_recall
value: 75.33877425315676
- type: max_accuracy
value: 86.71362595567975
- type: max_ap
value: 80.86797238493406
- type: max_f1
value: 73.24097703244622
---
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
mradermacher/airoboros-34b-3.3-i1-GGUF
|
mradermacher
| null |
[
"transformers",
"gguf",
"en",
"dataset:jondurbin/airoboros-3.2",
"dataset:bluemoon-fandom-1-1-rp-cleaned",
"dataset:boolq",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/cinematika-v0.1",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:grimulkan/LimaRP-augmented",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:mattpscott/airoboros-summarization",
"dataset:unalignment/toxic-dpo-v0.2",
"base_model:jondurbin/airoboros-34b-3.3",
"base_model:quantized:jondurbin/airoboros-34b-3.3",
"license:other",
"endpoints_compatible",
"region:us"
] | 1,712,112,742,000 | 2024-05-06T05:21:32 | 490 | 1 |
---
base_model: jondurbin/airoboros-34b-3.3
datasets:
- jondurbin/airoboros-3.2
- bluemoon-fandom-1-1-rp-cleaned
- boolq
- jondurbin/gutenberg-dpo-v0.1
- LDJnr/Capybara
- jondurbin/cinematika-v0.1
- glaiveai/glaive-function-calling-v2
- grimulkan/LimaRP-augmented
- piqa
- Vezora/Tested-22k-Python-Alpaca
- mattpscott/airoboros-summarization
- unalignment/toxic-dpo-v0.2
language:
- en
library_name: transformers
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE
quantized_by: mradermacher
---
## About
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/jondurbin/airoboros-34b-3.3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
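As a minimal sketch (not taken from this repository's own instructions), one way to fetch a single quant and run it locally is with `huggingface_hub` and `llama-cpp-python`; the file name comes from the table below, while the context size, GPU-layer setting, and prompt are illustrative assumptions:
```python
# Sketch: download one quant and run it with llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python` and enough RAM/VRAM
# for a 34B model; n_ctx, n_gpu_layers and the prompt are illustrative.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/airoboros-34b-3.3-i1-GGUF",
    filename="airoboros-34b-3.3.i1-Q4_K_M.gguf",  # see the quant table below
)

llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)
out = llm("Write one sentence about llamas.", max_tokens=64)
print(out["choices"][0]["text"])
```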
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ1_S.gguf) | i1-IQ1_S | 8.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ1_M.gguf) | i1-IQ1_M | 8.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ2_S.gguf) | i1-IQ2_S | 11.6 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ2_M.gguf) | i1-IQ2_M | 12.5 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q2_K.gguf) | i1-Q2_K | 13.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 14.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ3_S.gguf) | i1-IQ3_S | 15.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ3_M.gguf) | i1-IQ3_M | 16.2 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 19.1 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q4_0.gguf) | i1-Q4_0 | 20.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 24.3 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.0 | |
| [GGUF](https://huggingface.co/mradermacher/airoboros-34b-3.3-i1-GGUF/resolve/main/airoboros-34b-3.3.i1-Q6_K.gguf) | i1-Q6_K | 28.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
YxBxRyXJx/bge-base-financial-matryoshka
|
YxBxRyXJx
|
sentence-similarity
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5600",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,731,665,903,000 | 2024-11-15T10:19:00 | 6 | 0 |
---
base_model: BAAI/bge-base-en-v1.5
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5600
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: The Federal Energy Regulatory Commission (“FERC”) has also taken
steps to enable the participation of energy storage in wholesale energy markets.
sentences:
- What segment-specific regulations apply to CVS Health Corporation's Pharmacy &
Consumer Wellness segment?
- What types of contracts does the company have for its health insurance plans,
and how does premium revenue recognition function under these contracts?
- What federal agency has taken steps to facilitate energy storage participation
in wholesale energy markets?
- source_sentence: Investments in subsidiaries and partnerships which we do not control
but have significant influence are accounted for under the equity method.
sentences:
- How does the company aim to protect the health and well-being of the communities
it operates in?
- What are the key factors affecting the evaluation of the Economic Value of Equity
(EVE) at the Charles Schwab Corporation?
- What accounting method does the company use to account for investments in subsidiaries
and partnerships where it does not control but has significant influence?
- source_sentence: Item 8 of IBM's 2023 Annual Report includes financial statements
and supplementary data spanning pages 44 through 121.
sentences:
- What entities are included among the Guarantors that guarantee each other’s debt
securities as described in Comcast’s 2023 Annual Report?
- What uncertainties exist regarding projections of future cash needs and cash flows?
- How many pages in IBM's 2023 Annual Report to Stockholders are dedicated to financial
statements and supplementary data?
- source_sentence: 'Our compensation philosophy creates the framework for our rewards
strategy, which focuses on five key elements: pay-for-performance, external market-based
research, internal equity, fiscal responsibility, and legal compliance.'
sentences:
- What financial instruments does the company invest in that are sensitive to interest
rates?
- What elements are included in the company's compensation programs?
- What is the expected maximum potential loss from hurricane events for Chubb as
of the end of 2023?
- source_sentence: Outside of the U.S., many countries have established vehicle safety
standards and regulations and are likely to adopt additional, more stringent requirements
in the future.
sentences:
- What percentage of the company's sales categories in fiscal 2023 were failure
and maintenance related?
- What competitive factors influence Chubb International's international operations?
- What changes are occurring with vehicle safety regulations outside of the U.S.?
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.6885714285714286
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8278571428571428
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8728571428571429
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9164285714285715
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6885714285714286
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.275952380952381
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17457142857142854
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09164285714285714
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6885714285714286
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8278571428571428
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8728571428571429
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9164285714285715
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8042449175537354
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.768181405895692
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7712863400405022
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.6864285714285714
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8292857142857143
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8728571428571429
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9135714285714286
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6864285714285714
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2764285714285714
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17457142857142854
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09135714285714285
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6864285714285714
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8292857142857143
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8728571428571429
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9135714285714286
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8024352620004916
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7665753968253971
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7697268174707245
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.68
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.825
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8635714285714285
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9042857142857142
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.68
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.275
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1727142857142857
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09042857142857141
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.68
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.825
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8635714285714285
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9042857142857142
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7955058944909328
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7603066893424041
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7637281364444245
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.6621428571428571
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7964285714285714
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8457142857142858
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8907142857142857
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6621428571428571
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2654761904761905
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16914285714285712
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08907142857142857
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6621428571428571
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7964285714285714
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8457142857142858
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8907142857142857
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7772894744328753
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7408999433106581
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7449491476160666
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.6285714285714286
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7635714285714286
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8057142857142857
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8642857142857143
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6285714285714286
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2545238095238095
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16114285714285712
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08642857142857142
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6285714285714286
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7635714285714286
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8057142857142857
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8642857142857143
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7447153698860624
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7067037981859416
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7112341263725279
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("YxBxRyXJx/bge-base-financial-matryoshka")
# Run inference
sentences = [
'Outside of the U.S., many countries have established vehicle safety standards and regulations and are likely to adopt additional, more stringent requirements in the future.',
'What changes are occurring with vehicle safety regulations outside of the U.S.?',
"What competitive factors influence Chubb International's international operations?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
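Because the model was trained with Matryoshka dimensions (768, 512, 256, 128 and 64, evaluated below), embeddings can also be truncated to a smaller size at load time. A small sketch, assuming an installed sentence-transformers version that supports the `truncate_dim` argument:

```python
from sentence_transformers import SentenceTransformer

# Load the same model but keep only the first 256 embedding dimensions
model_256 = SentenceTransformer(
    "YxBxRyXJx/bge-base-financial-matryoshka", truncate_dim=256
)

embeddings = model_256.encode([
    "What federal agency has taken steps to facilitate energy storage participation in wholesale energy markets?",
])
print(embeddings.shape)  # expected: (1, 256)
```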
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.6886 | 0.6864 | 0.68 | 0.6621 | 0.6286 |
| cosine_accuracy@3 | 0.8279 | 0.8293 | 0.825 | 0.7964 | 0.7636 |
| cosine_accuracy@5 | 0.8729 | 0.8729 | 0.8636 | 0.8457 | 0.8057 |
| cosine_accuracy@10 | 0.9164 | 0.9136 | 0.9043 | 0.8907 | 0.8643 |
| cosine_precision@1 | 0.6886 | 0.6864 | 0.68 | 0.6621 | 0.6286 |
| cosine_precision@3 | 0.276 | 0.2764 | 0.275 | 0.2655 | 0.2545 |
| cosine_precision@5 | 0.1746 | 0.1746 | 0.1727 | 0.1691 | 0.1611 |
| cosine_precision@10 | 0.0916 | 0.0914 | 0.0904 | 0.0891 | 0.0864 |
| cosine_recall@1 | 0.6886 | 0.6864 | 0.68 | 0.6621 | 0.6286 |
| cosine_recall@3 | 0.8279 | 0.8293 | 0.825 | 0.7964 | 0.7636 |
| cosine_recall@5 | 0.8729 | 0.8729 | 0.8636 | 0.8457 | 0.8057 |
| cosine_recall@10 | 0.9164 | 0.9136 | 0.9043 | 0.8907 | 0.8643 |
| **cosine_ndcg@10** | **0.8042** | **0.8024** | **0.7955** | **0.7773** | **0.7447** |
| cosine_mrr@10 | 0.7682 | 0.7666 | 0.7603 | 0.7409 | 0.7067 |
| cosine_map@100 | 0.7713 | 0.7697 | 0.7637 | 0.7449 | 0.7112 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 5,600 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 44.34 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 20.46 tokens</li><li>max: 46 tokens</li></ul> |
* Samples:
| positive | anchor |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Z-net is AutoZone's proprietary electronic catalog and enables AutoZoners to efficiently look up parts that customers need, providing complete job solutions and information based on vehicle specifics. It also tracks inventory availability across different locations.</code> | <code>What is the purpose of Z-net in AutoZone stores?</code> |
| <code>In 2023, the allowance for loan and lease losses was $13.3 billion on total loans and leases of $1,050.2 billion, which excludes loans accounted for under the fair value option.</code> | <code>What was the total amount of loans and leases at Bank of America by the end of 2023, excluding those accounted for under the fair value option?</code> |
| <code>We significantly improved features in Service Manager™, which installers can use from their mobile devices to get service instantly. We continue to provide 24/7 support for installers and Enphase system owners globally across our phone, online chat, and email communications channel. We continue to train our customer service agents with a goal of reducing average customer wait times to under one minute, and we continue to expand our network of field service technicians in the United States, Europe and Australia to provide direct homeowner assistance.</code> | <code>What measures has Enphase Energy, Inc. taken to improve customer service in 2023?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
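The JSON above corresponds roughly to the following loss construction in sentence-transformers (a sketch; the variable names and the wrapped base model are illustrative):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Wrap an in-batch-negatives loss so it is applied at every Matryoshka dimension
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model=model,
    loss=inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```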
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
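As a rough sketch, the non-default hyperparameters above map onto `SentenceTransformerTrainingArguments` along these lines (the output directory is a placeholder, and `save_strategy` is added only so that best-model loading validates):

```python
from sentence_transformers.training_args import (
    SentenceTransformerTrainingArguments,
    BatchSamplers,
)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # placeholder
    num_train_epochs=2,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    eval_strategy="epoch",
    save_strategy="epoch",  # added: eval/save strategies must match for best-model loading
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```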
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:----------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.9143 | 10 | 1.4537 | 0.7992 | 0.7952 | 0.7900 | 0.7703 | 0.7350 |
| **1.8286** | **20** | **0.6857** | **0.8042** | **0.8024** | **0.7955** | **0.7773** | **0.7447** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.0
- Transformers: 4.46.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
nahyeonkang/ai.keepit
|
nahyeonkang
|
text-classification
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:nsmc",
"base_model:beomi/kcbert-base",
"base_model:finetune:beomi/kcbert-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,691,079,661,000 | 2023-08-03T17:56:35 | 13 | 0 |
---
base_model: beomi/kcbert-base
datasets:
- nsmc
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: ai.keepit
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: nsmc
type: nsmc
config: default
split: test
args: default
metrics:
- type: accuracy
value: 0.90204
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai.keepit
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the nsmc dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3046
- Accuracy: 0.9020
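A minimal inference sketch using the standard `transformers` pipeline API (the example sentence and the raw `LABEL_0`/`LABEL_1` output are illustrative; the card does not document which label maps to which sentiment):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="nahyeonkang/ai.keepit")

# NSMC is a Korean movie-review sentiment dataset, so a Korean review is a natural input
print(classifier("이 영화 정말 재미있어요"))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}]  -- label name and score illustrative
```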
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2715 | 1.0 | 9375 | 0.2604 | 0.8957 |
| 0.2137 | 2.0 | 18750 | 0.2677 | 0.9003 |
| 0.1655 | 3.0 | 28125 | 0.3046 | 0.9020 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|
google/t5-large-lm-adapt
|
google
|
text2text-generation
|
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"t5-lm-adapt",
"en",
"dataset:c4",
"arxiv:2002.05202",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2023-01-24T16:52:08 | 2,748 | 8 |
---
datasets:
- c4
language: en
license: apache-2.0
tags:
- t5-lm-adapt
---
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1 - LM-Adapted
## Version 1.1 - LM-Adapted
[T5 Version 1.1 - LM Adapted](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) includes the following improvements compared to the original [T5 model](https://huggingface.co/t5-large):
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only without mixing in the downstream tasks.
- No parameter sharing between the embedding and the classifier layer.
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`.
and is pretrained on both the denoising and the language modeling objectives.
More specifically, this checkpoint is initialized from [T5 Version 1.1 - Large](https://huggingface.co/google/t5-v1_1-large)
and then trained for an additional 100K steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf).
This adaptation improves the ability of the model to be used for prompt tuning.
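A minimal loading-and-generation sketch with the Hugging Face `transformers` API (the prompt and generation settings are illustrative; since this checkpoint is LM-adapted rather than instruction-tuned, raw completions are mainly a starting point for prompt tuning or further fine-tuning):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/t5-large-lm-adapt")
model = T5ForConditionalGeneration.from_pretrained("google/t5-large-lm-adapt")

# The LM adaptation trains the decoder to continue a natural-language prefix
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```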
**Note**: A popular fine-tuned version of the *T5 Version 1.1 - LM Adapted* model is [BigScience's T0pp](https://huggingface.co/bigscience/T0pp).
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?other=t5-lm-adapt)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

|
[
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"SUMMARIZATION"
] |
Non_BioNLP
|
ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa-finetuned-ar
|
ahmeddbahaa
|
summarization
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"Abstractive Summarization",
"ar",
"generated_from_trainer",
"dataset:xlsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,654,705,438,000 | 2022-06-08T22:22:19 | 26 | 1 |
---
datasets:
- xlsum
tags:
- mt5
- summarization
- Abstractive Summarization
- ar
- generated_from_trainer
model-index:
- name: mT5_multilingual_XLSum-finetuned-fa-finetuned-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mT5_multilingual_XLSum-finetuned-fa-finetuned-ar
This model is a fine-tuned version of [ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa](https://huggingface.co/ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6352
- Rouge-1: 28.69
- Rouge-2: 11.6
- Rouge-l: 24.29
- Gen Len: 41.37
- Bertscore: 73.37
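A minimal summarization sketch is given below; the decoding settings are illustrative assumptions, not the configuration used to produce the scores above.
```python
# Hedged sketch: abstractive summarization of an Arabic article with this checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model_id = "ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa-finetuned-ar"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
article = "..."  # place an Arabic news article here
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(
    **inputs,
    max_length=64,          # illustrative cap; the card reports ~41 generated tokens on average
    num_beams=4,
    no_repeat_ngram_size=2,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```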
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- num_epochs: 5
- label_smoothing_factor: 0.1
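For readers who want to reproduce a comparable setup, the values above map roughly onto `Seq2SeqTrainingArguments` as in the hedged sketch below; the output directory and any setting not listed in this card are assumptions, and this is not the exact script used for training.
```python
# Hedged sketch: the listed hyperparameters expressed as Seq2SeqTrainingArguments.
# output_dir and anything not listed in the card are assumptions.
from transformers import Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments(
    output_dir="mT5_multilingual_XLSum-finetuned-fa-finetuned-ar",  # assumed
    learning_rate=5e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,   # effective train batch size of 16
    warmup_steps=250,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    label_smoothing_factor=0.1,
    seed=42,
)
```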
### Training results
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
[
"SUMMARIZATION"
] |
Non_BioNLP
|
gokuls/bert_uncased_L-10_H-768_A-12_emotion
|
gokuls
|
text-classification
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:google/bert_uncased_L-10_H-768_A-12",
"base_model:finetune:google/bert_uncased_L-10_H-768_A-12",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,696,611,110,000 | 2023-10-06T16:59:05 | 7 | 0 |
---
base_model: google/bert_uncased_L-10_H-768_A-12
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert_uncased_L-10_H-768_A-12_emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- type: accuracy
value: 0.941
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_L-10_H-768_A-12_emotion
This model is a fine-tuned version of [google/bert_uncased_L-10_H-768_A-12](https://huggingface.co/google/bert_uncased_L-10_H-768_A-12) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2108
- Accuracy: 0.941
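A minimal classification sketch with the `transformers` pipeline is shown below; the example sentence is illustrative only.
```python
# Hedged sketch: emotion classification with this fine-tuned checkpoint.
from transformers import pipeline
classifier = pipeline(
    "text-classification",
    model="gokuls/bert_uncased_L-10_H-768_A-12_emotion",
)
print(classifier("i am feeling quite hopeful about tomorrow"))
```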
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 33
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4839 | 1.0 | 250 | 0.1626 | 0.9375 |
| 0.1446 | 2.0 | 500 | 0.1273 | 0.938 |
| 0.1018 | 3.0 | 750 | 0.1331 | 0.9375 |
| 0.0835 | 4.0 | 1000 | 0.1562 | 0.9395 |
| 0.0688 | 5.0 | 1250 | 0.1724 | 0.94 |
| 0.0487 | 6.0 | 1500 | 0.2108 | 0.941 |
| 0.0315 | 7.0 | 1750 | 0.2439 | 0.9375 |
| 0.0201 | 8.0 | 2000 | 0.2511 | 0.9395 |
| 0.0128 | 9.0 | 2250 | 0.2772 | 0.934 |
| 0.0086 | 10.0 | 2500 | 0.2811 | 0.939 |
### Framework versions
- Transformers 4.34.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.14.5
- Tokenizers 0.14.1
|
[
"TEXT_CLASSIFICATION"
] |
Non_BioNLP
|