| modelId<br>string (5–139 chars) | author<br>string (2–42 chars) | last_modified<br>timestamp[us, tz=UTC] (2020-02-15 11:33:14 – 2025-08-08 12:29:11) | downloads<br>int64 (0 – 223M) | likes<br>int64 (0 – 11.7k) | library_name<br>string (493 classes) | tags<br>list (1 – 4.05k items) | pipeline_tag<br>string (55 classes) | createdAt<br>timestamp[us, tz=UTC] (2022-03-02 23:29:04 – 2025-08-08 12:28:45) | card<br>string (11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
Tashiroksksks/EVELLY-LORA | Tashiroksksks | 2025-04-29T17:11:03Z | 0 | 0 | null | ["license:other", "region:us"] | null | 2025-04-29T16:36:36Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
robiulawaldev/ae7a21fe-c0bb-41b5-af61-ae6aa08e8d19 | robiulawaldev | 2025-04-29T17:10:28Z | 0 | 0 | transformers | ["transformers", "generated_from_trainer", "endpoints_compatible", "region:us"] | null | 2025-04-29T17:10:03Z |
---
library_name: transformers
model_name: robiulawaldev/ae7a21fe-c0bb-41b5-af61-ae6aa08e8d19
tags:
- generated_from_trainer
licence: license
---
# Model Card for robiulawaldev/ae7a21fe-c0bb-41b5-af61-ae6aa08e8d19
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# "None" is the placeholder left by the card generator; substitute a real model id before running.
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
dzanbek/fcc33383-cfbb-4a4d-9e53-9e61d25c9e6c | dzanbek | 2025-04-29T17:05:54Z | 0 | 0 | peft | ["peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-70m", "base_model:adapter:EleutherAI/pythia-70m", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us"] | null | 2025-04-29T17:03:07Z |
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fcc33383-cfbb-4a4d-9e53-9e61d25c9e6c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: EleutherAI/pythia-70m
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 061a0d3c8a6ab8e5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/061a0d3c8a6ab8e5_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: dzanbek/fcc33383-cfbb-4a4d-9e53-9e61d25c9e6c
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/061a0d3c8a6ab8e5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b2017d8a-30cc-48dc-9b42-4b221d786cc3
wandb_project: s56-2
wandb_run: your_name
wandb_runid: b2017d8a-30cc-48dc-9b42-4b221d786cc3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
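The `datasets.type` block in the config above defines a custom prompt format: each JSON record's `problem` field is rendered through the `'{instruction}'` template as the prompt, and `solution` becomes the target. A minimal sketch of that mapping (field names taken from the config; the helper itself is a hypothetical illustration, not Axolotl's actual implementation):

```python
def render_record(record, fmt="{instruction}", field_instruction="problem", field_output="solution"):
    """Render one JSON training record into a (prompt, completion) pair.

    Mirrors the custom-format fields in the axolotl config above;
    illustrative only, not Axolotl code.
    """
    prompt = fmt.format(instruction=record[field_instruction])
    completion = record[field_output]
    return prompt, completion

# A record shaped like those in the train-data JSON is assumed to look like:
pair = render_record({"problem": "What is 2 + 2?", "solution": "4"})
print(pair)  # ('What is 2 + 2?', '4')
```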
# fcc33383-cfbb-4a4d-9e53-9e61d25c9e6c
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1350
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
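The schedule these settings imply (5 linear warmup steps into a cosine decay over 200 total steps) can be sketched in plain Python; this is an illustrative reimplementation, not the exact Transformers scheduler:

```python
import math

def lr_at_step(step, base_lr=5e-06, warmup_steps=5, total_steps=200):
    """Cosine learning-rate schedule with linear warmup, per the hyperparameters above (illustrative)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # linear warmup from 0 to base_lr
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay toward 0

print(f"{lr_at_step(5):.1e}")    # peak right after warmup: 5.0e-06
print(f"{lr_at_step(200):.1e}")  # fully decayed at the final step
```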
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1517 | 0.0191 | 200 | 3.1350 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
anyai/Swallow-7b-hf-oasst1-21k-ja-aio-retriever | anyai | 2025-04-29T17:04:24Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-04-29T16:57:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
joboffer/8ebb93e2-6cbd-4b6b-a766-f7b9477c23f0 | joboffer | 2025-04-29T17:04:12Z | 0 | 0 | peft | ["peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-70m", "base_model:adapter:EleutherAI/pythia-70m", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us"] | null | 2025-04-29T17:03:22Z |
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8ebb93e2-6cbd-4b6b-a766-f7b9477c23f0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 061a0d3c8a6ab8e5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/061a0d3c8a6ab8e5_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: joboffer/8ebb93e2-6cbd-4b6b-a766-f7b9477c23f0
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/061a0d3c8a6ab8e5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b2017d8a-30cc-48dc-9b42-4b221d786cc3
wandb_project: s56-33
wandb_run: your_name
wandb_runid: b2017d8a-30cc-48dc-9b42-4b221d786cc3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8ebb93e2-6cbd-4b6b-a766-f7b9477c23f0
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.6609 | 0.0191 | 200 | 3.6109 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
vermoney/ab1d118f-299c-450a-9e2f-42a14f119465 | vermoney | 2025-04-29T17:03:30Z | 0 | 0 | peft | ["peft", "safetensors", "gpt_neox", "axolotl", "generated_from_trainer", "base_model:EleutherAI/pythia-70m", "base_model:adapter:EleutherAI/pythia-70m", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us"] | null | 2025-04-29T17:02:36Z |
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-70m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ab1d118f-299c-450a-9e2f-42a14f119465
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-70m
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 061a0d3c8a6ab8e5_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/061a0d3c8a6ab8e5_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vermoney/ab1d118f-299c-450a-9e2f-42a14f119465
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/061a0d3c8a6ab8e5_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b2017d8a-30cc-48dc-9b42-4b221d786cc3
wandb_project: s56-9
wandb_run: your_name
wandb_runid: b2017d8a-30cc-48dc-9b42-4b221d786cc3
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# ab1d118f-299c-450a-9e2f-42a14f119465
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5810
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.6138 | 0.0191 | 200 | 3.5810 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
MeeraRamteke/Elon | MeeraRamteke | 2025-04-29T17:02:28Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-04-29T17:02:28Z |
---
license: apache-2.0
---
|
Rajupwd444/Raju | Rajupwd444 | 2025-04-29T17:01:04Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-04-29T17:01:04Z |
---
license: apache-2.0
---
|
omsh97/Industry_Project_v3 | omsh97 | 2025-04-29T16:59:28Z | 0 | 0 | null | ["gguf", "mistral", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-04-29T12:01:24Z |
---
license: apache-2.0
---
|
PR0G3T/Reinforce-CartPole-v1 | PR0G3T | 2025-04-29T16:58:27Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2025-04-29T16:58:15Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, see Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
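The Reinforce algorithm this agent was trained with weights each action's log-probability by the discounted return from that timestep onward. The return computation at the heart of the update can be sketched as follows (a generic illustration; the course's actual implementation may differ):

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute the return G_t = sum_k gamma^k * r_{t+k} for each step of one episode."""
    returns = []
    g = 0.0
    for r in reversed(rewards):  # accumulate from the end of the episode backward
        g = r + gamma * g
        returns.append(g)
    returns.reverse()  # restore chronological order
    return returns

# A 3-step episode with reward 1 per step (CartPole gives +1 per surviving step):
print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # [1.75, 1.5, 1.0]
```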
|
tomaarsen/splade-ModernBERT-nq-fresh | tomaarsen | 2025-04-29T16:58:01Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "modernbert", "sparse-encoder", "generated_from_trainer", "dataset_size:99000", "loss:SpladeLoss", "feature-extraction", "en", "dataset:sentence-transformers/natural-questions", "arxiv:1908.10084", "arxiv:2503.01776", "model-index", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | feature-extraction | 2025-04-29T16:57:43Z |
---
language:
- en
tags:
- sentence-transformers
- sparse-encoder
- generated_from_trainer
- dataset_size:99000
- loss:SpladeLoss
widget:
- source_sentence: who are the dancers in the limp bizkit rollin video
sentences:
- Voting age Before the Second World War, the voting age in almost all countries
was 21 years or higher. Czechoslovakia was the first to reduce the voting age
to 20 years in 1946, and by 1968 a total of 17 countries had lowered their voting
age.[1] Many countries, particularly in Western Europe, reduced their voting ages
to 18 years during the 1970s, starting with the United Kingdom (1969),[2] with
the United States (26th Amendment) (1971), Canada, West Germany (1972), Australia
(1974), France (1974), and others following soon afterwards. By the end of the
20th century, 18 had become by far the most common voting age. However, a few
countries maintain a voting age of 20 years or higher. It was argued that young
men could be drafted to go to war at 18, and many people felt they should be able
to vote at the age of 18.[3]
- Rollin' (Limp Bizkit song) The music video was filmed atop the South Tower of
the former World Trade Center in New York City. The introduction features Ben
Stiller and Stephen Dorff mistaking Fred Durst for the valet and giving him the
keys to their Bentley Azure. Also making a cameo is break dancer Mr. Wiggles.
The rest of the video has several cuts to Durst and his bandmates hanging out
of the Bentley as they drive about Manhattan. The song Ben Stiller is playing
at the beginning is "My Generation" from the same album. The video also features
scenes of Fred Durst with five girls dancing in a room. The video was filmed around
the same time as the film Zoolander, which explains Stiller and Dorff's appearance.
Fred Durst has a small cameo in that film.
- Eobard Thawne When Thawne reappears, he murders the revived Johnny Quick,[9] before
proceeding to trap Barry and the revived Max Mercury inside the negative Speed
Force. Thawne then attempts to kill Wally West's children through their connection
to the Speed Force in front of Linda Park-West, only to be stopped by Jay Garrick
and Bart Allen. Thawne defeats Jay and prepares to kill Bart, but Barry, Max,
Wally, Jesse Quick, and Impulse arrive to prevent the villain from doing so.[8][10]
In the ensuing fight, Thawne reveals that he is responsible for every tragedy
that has occurred in Barry's life, including the death of his mother. Thawne then
decides to destroy everything the Flash holds dear by killing Barry's wife, Iris,
before they even met.[10]
- source_sentence: who wins season 14 of hell's kitchen
sentences:
- Hell's Kitchen (U.S. season 14) Season 14 of the American competitive reality
television series Hell's Kitchen premiered on March 3, 2015 on Fox. The prize
is a head chef position at Gordon Ramsay Pub & Grill in Caesars Atlantic City.[1]
Gordon Ramsay returned as head chef with Andi Van Willigan and James Avery returning
as sous-chefs for both their respective kitchens as well as Marino Monferrato
as the maître d'. Executive chef Meghan Gill from Roanoke, Virginia, won the
competition, thus becoming the fourteenth winner of Hell's Kitchen.
- 'Maze Runner: The Death Cure On April 22, 2017, the studio delayed the release
date once again, to February 9, 2018, in order to allow more time for post-production;
months later, on August 25, the studio moved the release forward two weeks.[17]
The film will premiere on January 26, 2018 in 3D, IMAX and IMAX 3D.[18][19]'
- North American Plate On its western edge, the Farallon Plate has been subducting
under the North American Plate since the Jurassic Period. The Farallon Plate has
almost completely subducted beneath the western portion of the North American
Plate leaving that part of the North American Plate in contact with the Pacific
Plate as the San Andreas Fault. The Juan de Fuca, Explorer, Gorda, Rivera, Cocos
and Nazca plates are remnants of the Farallon Plate.
- source_sentence: who played the dj in the movie the warriors
sentences:
- List of Arrow episodes As of May 17, 2018,[update] 138 episodes of Arrow have
aired, concluding the sixth season. On April 2, 2018, the CW renewed the series
for a seventh season.[1]
- Lynne Thigpen Cherlynne Theresa "Lynne" Thigpen (December 22, 1948 – March 12,
2003) was an American actress, best known for her role as "The Chief" of ACME
in the various Carmen Sandiego television series and computer games from 1991
to 1997. For her varied television work, Thigpen was nominated for six Daytime
Emmy Awards; she won a Tony Award in 1997 for portraying Dr. Judith Kaufman in
An American Daughter.
- The Washington Post The Washington Post is an American daily newspaper. It is
the most widely circulated newspaper published in Washington, D.C., and was founded
on December 6, 1877,[7] making it the area's oldest extant newspaper. In February
2017, amid a barrage of criticism from President Donald Trump over the paper's
coverage of his campaign and early presidency as well as concerns among the American
press about Trump's criticism and threats against journalists who provide coverage
he deems unfavorable, the Post adopted the slogan "Democracy Dies in Darkness".[8]
- source_sentence: how old was messi when he started his career
sentences:
- Lionel Messi Born and raised in central Argentina, Messi was diagnosed with a
growth hormone deficiency as a child. At age 13, he relocated to Spain to join
Barcelona, who agreed to pay for his medical treatment. After a fast progression
through Barcelona's youth academy, Messi made his competitive debut aged 17 in
October 2004. Despite being injury-prone during his early career, he established
himself as an integral player for the club within the next three years, finishing
2007 as a finalist for both the Ballon d'Or and FIFA World Player of the Year
award, a feat he repeated the following year. His first uninterrupted campaign
came in the 2008–09 season, during which he helped Barcelona achieve the first
treble in Spanish football. At 22 years old, Messi won the Ballon d'Or and FIFA
World Player of the Year award by record voting margins.
- We Are Marshall Filming of We Are Marshall commenced on April 3, 2006, in Huntington,
West Virginia, and was completed in Atlanta, Georgia. The premiere for the film
was held at the Keith Albee Theater on December 12, 2006, in Huntington; other
special screenings were held at Pullman Square. The movie was released nationwide
on December 22, 2006.
- One Fish, Two Fish, Red Fish, Blue Fish One Fish, Two Fish, Red Fish, Blue Fish
is a 1960 children's book by Dr. Seuss. It is a simple rhyming book for beginning
readers, with a freewheeling plot about a boy and a girl named Jay and Kay and
the many amazing creatures they have for friends and pets. Interspersed are some
rather surreal and unrelated skits, such as a man named Ned whose feet stick out
from his bed, and a creature who has a bird in his ear. As of 2001, over 6 million
copies of the book had been sold, placing it 13th on a list of "All-Time Bestselling
Children's Books" from Publishers Weekly.[1] Based on a 2007 online poll, the
United States' National Education Association labor union named the book one of
its "Teachers' Top 100 Books for Children."[2]
- source_sentence: is send in the clowns from a musical
sentences:
- Money in the Bank ladder match The first match was contested in 2005 at WrestleMania
21, after being invented (in kayfabe) by Chris Jericho.[1] At the time, it was
exclusive to wrestlers of the Raw brand, and Edge won the inaugural match.[1]
From then until 2010, the Money in the Bank ladder match, now open to all WWE
brands, became a WrestleMania mainstay. 2010 saw a second and third Money in the
Bank ladder match when the Money in the Bank pay-per-view debuted in July. Unlike
the matches at WrestleMania, this new event featured two such ladder matches –
one each for a contract for the WWE Championship and World Heavyweight Championship,
respectively.
- The Suite Life on Deck The Suite Life on Deck is an American sitcom that aired
on Disney Channel from September 26, 2008 to May 6, 2011. It is a sequel/spin-off
of the Disney Channel Original Series The Suite Life of Zack & Cody. The series
follows twin brothers Zack and Cody Martin and hotel heiress London Tipton in
a new setting, the SS Tipton, where they attend classes at "Seven Seas High School"
and meet Bailey Pickett while Mr. Moseby manages the ship. The ship travels around
the world to nations such as Italy, France, Greece, India, Sweden and the United
Kingdom where the characters experience different cultures, adventures, and situations.[1]
- 'Send In the Clowns "Send In the Clowns" is a song written by Stephen Sondheim
for the 1973 musical A Little Night Music, an adaptation of Ingmar Bergman''s
film Smiles of a Summer Night. It is a ballad from Act Two, in which the character
Desirée reflects on the ironies and disappointments of her life. Among other things,
she looks back on an affair years earlier with the lawyer Fredrik, who was deeply
in love with her but whose marriage proposals she had rejected. Meeting him after
so long, she realizes she is in love with him and finally ready to marry him,
but now it is he who rejects her: he is in an unconsummated marriage with a much
younger woman. Desirée proposes marriage to rescue him from this situation, but
he declines, citing his dedication to his bride. Reacting to his rejection, Desirée
sings this song. The song is later reprised as a coda after Fredrik''s young wife
runs away with his son, and Fredrik is finally free to accept Desirée''s offer.[1]'
datasets:
- sentence-transformers/natural-questions
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
co2_eq_emissions:
emissions: 65.75690749093074
energy_consumed: 0.16917048919462915
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.59
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: SparseEncoder
results:
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: dot_accuracy@1
value: 0.06
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.08
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.1
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.22
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.06
name: Dot Precision@1
- type: dot_precision@3
value: 0.026666666666666665
name: Dot Precision@3
- type: dot_precision@5
value: 0.02
name: Dot Precision@5
- type: dot_precision@10
value: 0.022000000000000002
name: Dot Precision@10
- type: dot_recall@1
value: 0.06
name: Dot Recall@1
- type: dot_recall@3
value: 0.08
name: Dot Recall@3
- type: dot_recall@5
value: 0.1
name: Dot Recall@5
- type: dot_recall@10
value: 0.22
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.11962859325699711
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.08988095238095237
name: Dot Mrr@10
- type: dot_map@100
value: 0.10836157721947347
name: Dot Map@100
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: dot_accuracy@1
value: 0.04
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.14
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.22
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.28
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.04
name: Dot Precision@1
- type: dot_precision@3
value: 0.05333333333333334
name: Dot Precision@3
- type: dot_precision@5
value: 0.05600000000000001
name: Dot Precision@5
- type: dot_precision@10
value: 0.039999999999999994
name: Dot Precision@10
- type: dot_recall@1
value: 0.0007272727272727272
name: Dot Recall@1
- type: dot_recall@3
value: 0.003485594847471253
name: Dot Recall@3
- type: dot_recall@5
value: 0.015079083137479745
name: Dot Recall@5
- type: dot_recall@10
value: 0.025913656492513457
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.046230356055562416
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.10252380952380952
name: Dot Mrr@10
- type: dot_map@100
value: 0.013574541484243996
name: Dot Map@100
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: dot_accuracy@1
value: 0.06
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.16
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.18
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.3
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.06
name: Dot Precision@1
- type: dot_precision@3
value: 0.05333333333333333
name: Dot Precision@3
- type: dot_precision@5
value: 0.036000000000000004
name: Dot Precision@5
- type: dot_precision@10
value: 0.030000000000000006
name: Dot Precision@10
- type: dot_recall@1
value: 0.06
name: Dot Recall@1
- type: dot_recall@3
value: 0.16
name: Dot Recall@3
- type: dot_recall@5
value: 0.18
name: Dot Recall@5
- type: dot_recall@10
value: 0.28
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.15945390133280277
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.12333333333333332
name: Dot Mrr@10
- type: dot_map@100
value: 0.1410012610991671
name: Dot Map@100
- task:
type: sparse-nano-beir
name: Sparse Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: dot_accuracy@1
value: 0.05333333333333334
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.12666666666666668
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.16666666666666666
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.26666666666666666
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.05333333333333334
name: Dot Precision@1
- type: dot_precision@3
value: 0.044444444444444446
name: Dot Precision@3
- type: dot_precision@5
value: 0.037333333333333336
name: Dot Precision@5
- type: dot_precision@10
value: 0.030666666666666665
name: Dot Precision@10
- type: dot_recall@1
value: 0.04024242424242424
name: Dot Recall@1
- type: dot_recall@3
value: 0.08116186494915709
name: Dot Recall@3
- type: dot_recall@5
value: 0.09835969437915992
name: Dot Recall@5
- type: dot_recall@10
value: 0.17530455216417118
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.10843761688178744
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.10524603174603174
name: Dot Mrr@10
- type: dot_map@100
value: 0.08764579326762818
name: Dot Map@100
---
# SparseEncoder
This is a [Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model trained on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 50368-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** Sparse Encoder
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 8192 tokens
- **Training Dataset:**
- [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/splade-ModernBERT-nq-fresh")
# Run inference
sentences = [
'is send in the clowns from a musical',
'Send In the Clowns "Send In the Clowns" is a song written by Stephen Sondheim for the 1973 musical A Little Night Music, an adaptation of Ingmar Bergman\'s film Smiles of a Summer Night. It is a ballad from Act Two, in which the character Desirée reflects on the ironies and disappointments of her life. Among other things, she looks back on an affair years earlier with the lawyer Fredrik, who was deeply in love with her but whose marriage proposals she had rejected. Meeting him after so long, she realizes she is in love with him and finally ready to marry him, but now it is he who rejects her: he is in an unconsummated marriage with a much younger woman. Desirée proposes marriage to rescue him from this situation, but he declines, citing his dedication to his bride. Reacting to his rejection, Desirée sings this song. The song is later reprised as a coda after Fredrik\'s young wife runs away with his son, and Fredrik is finally free to accept Desirée\'s offer.[1]',
'The Suite Life on Deck The Suite Life on Deck is an American sitcom that aired on Disney Channel from September 26, 2008 to May 6, 2011. It is a sequel/spin-off of the Disney Channel Original Series The Suite Life of Zack & Cody. The series follows twin brothers Zack and Cody Martin and hotel heiress London Tipton in a new setting, the SS Tipton, where they attend classes at "Seven Seas High School" and meet Bailey Pickett while Mr. Moseby manages the ship. The ship travels around the world to nations such as Italy, France, Greece, India, Sweden and the United Kingdom where the characters experience different cultures, adventures, and situations.[1]',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 50368)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Sparse Information Retrieval
* Datasets: `NanoMSMARCO`, `NanoNFCorpus` and `NanoNQ`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)
| Metric | NanoMSMARCO | NanoNFCorpus | NanoNQ |
|:-----------------|:------------|:-------------|:-----------|
| dot_accuracy@1 | 0.06 | 0.04 | 0.06 |
| dot_accuracy@3 | 0.08 | 0.14 | 0.16 |
| dot_accuracy@5 | 0.1 | 0.22 | 0.18 |
| dot_accuracy@10 | 0.22 | 0.28 | 0.3 |
| dot_precision@1 | 0.06 | 0.04 | 0.06 |
| dot_precision@3 | 0.0267 | 0.0533 | 0.0533 |
| dot_precision@5 | 0.02 | 0.056 | 0.036 |
| dot_precision@10 | 0.022 | 0.04 | 0.03 |
| dot_recall@1 | 0.06 | 0.0007 | 0.06 |
| dot_recall@3 | 0.08 | 0.0035 | 0.16 |
| dot_recall@5 | 0.1 | 0.0151 | 0.18 |
| dot_recall@10 | 0.22 | 0.0259 | 0.28 |
| **dot_ndcg@10** | **0.1196** | **0.0462** | **0.1595** |
| dot_mrr@10 | 0.0899 | 0.1025 | 0.1233 |
| dot_map@100 | 0.1084 | 0.0136 | 0.141 |
#### Sparse Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>SparseNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco",
"nfcorpus",
"nq"
]
}
```
| Metric | Value |
|:-----------------|:-----------|
| dot_accuracy@1 | 0.0533 |
| dot_accuracy@3 | 0.1267 |
| dot_accuracy@5 | 0.1667 |
| dot_accuracy@10 | 0.2667 |
| dot_precision@1 | 0.0533 |
| dot_precision@3 | 0.0444 |
| dot_precision@5 | 0.0373 |
| dot_precision@10 | 0.0307 |
| dot_recall@1 | 0.0402 |
| dot_recall@3 | 0.0812 |
| dot_recall@5 | 0.0984 |
| dot_recall@10 | 0.1753 |
| **dot_ndcg@10** | **0.1084** |
| dot_mrr@10 | 0.1052 |
| dot_map@100 | 0.0876 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 99,000 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 29 characters</li><li>mean: 46.96 characters</li><li>max: 93 characters</li></ul> | <ul><li>min: 10 characters</li><li>mean: 582.13 characters</li><li>max: 2141 characters</li></ul> |
* Samples:
| query | answer |
|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>who played the father in papa don't preach</code> | <code>Alex McArthur Alex McArthur (born March 6, 1957) is an American actor.</code> |
| <code>where was the location of the battle of hastings</code> | <code>Battle of Hastings The Battle of Hastings[a] was fought on 14 October 1066 between the Norman-French army of William, the Duke of Normandy, and an English army under the Anglo-Saxon King Harold Godwinson, beginning the Norman conquest of England. It took place approximately 7 miles (11 kilometres) northwest of Hastings, close to the present-day town of Battle, East Sussex, and was a decisive Norman victory.</code> |
| <code>how many puppies can a dog give birth to</code> | <code>Canine reproduction The largest litter size to date was set by a Neapolitan Mastiff in Manea, Cambridgeshire, UK on November 29, 2004; the litter was 24 puppies.[22]</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{'lamda_corpus': 0.08, 'lamda_query': 0.1, 'main_loss': SparseMultipleNegativesRankingLoss(
(model): SparseEncoder(
(0): MLMTransformer({'max_seq_length': 8192, 'do_lower_case': False}) with MLMTransformer model: ModernBertForMaskedLM
(1): SpladePooling({'pooling_strategy': 'max', 'word_embedding_dimension': None})
)
(cross_entropy_loss): CrossEntropyLoss()
)}
```
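SpladeLoss pairs the ranking loss with sparsity regularizers on the query and document activations, weighted by the `lamda_query` and `lamda_corpus` values above. The standard SPLADE choice is the FLOPS regularizer, which penalizes the squared mean absolute activation per vocabulary dimension. A rough NumPy sketch of that term (an illustration of the formula, not the library's exact implementation):

```python
import numpy as np

def flops_regularizer(activations: np.ndarray) -> float:
    """FLOPS sparsity term: sum_j (mean_i |w_ij|)^2 over vocabulary
    dimensions j, for a batch of activations w of shape (batch, vocab)."""
    mean_abs = np.abs(activations).mean(axis=0)
    return float((mean_abs ** 2).sum())

batch = np.array([
    [0.0, 2.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
])
# Mean |activation| per dim: [0, 1, 0, 1] -> FLOPS = 0 + 1 + 0 + 1 = 2
print(flops_regularizer(batch))  # 2.0
```

Dimensions that are active across many examples in the batch are penalized quadratically, pushing the model toward embeddings with few consistently-active terms; the term enters the total loss scaled by the respective lambda.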
### Evaluation Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 1,000 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:----------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 30 characters</li><li>mean: 47.2 characters</li><li>max: 96 characters</li></ul> | <ul><li>min: 58 characters</li><li>mean: 598.96 characters</li><li>max: 2480 characters</li></ul> |
* Samples:
| query | answer |
|:-------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>where is the tiber river located in italy</code> | <code>Tiber The Tiber (/ˈtaɪbər/, Latin: Tiberis,[1] Italian: Tevere [ˈteːvere])[2] is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing 406 kilometres (252 mi) through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino.[3] It drains a basin estimated at 17,375 square kilometres (6,709 sq mi). The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks.</code> |
| <code>what kind of car does jay gatsby drive</code> | <code>Jay Gatsby At the Buchanan home, Jordan Baker, Nick, Jay, and the Buchanans decide to visit New York City. Tom borrows Gatsby's yellow Rolls Royce to drive up to the city. On the way to New York City, Tom makes a detour at a gas station in "the Valley of Ashes", a run-down part of Long Island. The owner, George Wilson, shares his concern that his wife, Myrtle, may be having an affair. This unnerves Tom, who has been having an affair with Myrtle, and he leaves in a hurry.</code> |
| <code>who sings if i can dream about you</code> | <code>I Can Dream About You "I Can Dream About You" is a song performed by American singer Dan Hartman on the soundtrack album of the film Streets of Fire. Released in 1984 as a single from the soundtrack, and included on Hartman's album I Can Dream About You, it reached number 6 on the Billboard Hot 100.[1]</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{'lamda_corpus': 0.08, 'lamda_query': 0.1, 'main_loss': SparseMultipleNegativesRankingLoss(
(model): SparseEncoder(
(0): MLMTransformer({'max_seq_length': 8192, 'do_lower_case': False}) with MLMTransformer model: ModernBertForMaskedLM
(1): SpladePooling({'pooling_strategy': 'max', 'word_embedding_dimension': None})
)
(cross_entropy_loss): CrossEntropyLoss()
)}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 4
- `learning_rate`: 5e-06
- `num_train_epochs`: 1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {'num_cycles': 0.5}
- `bf16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {'num_cycles': 0.5}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 |
|:------:|:-----:|:-------------:|:---------------:|:-----------------------:|:------------------------:|:------------------:|:-------------------------:|
| 0.5980 | 14800 | 0.1534 | - | - | - | - | - |
| 0.6141 | 15200 | 0.1246 | - | - | - | - | - |
| 0.6303 | 15600 | 0.1367 | - | - | - | - | - |
| 0.6465 | 16000 | 0.1492 | - | - | - | - | - |
| 0.6626 | 16400 | 0.1306 | - | - | - | - | - |
| 0.6788 | 16800 | 0.1344 | - | - | - | - | - |
| 0.6949 | 17200 | 0.1317 | - | - | - | - | - |
| 0.7111 | 17600 | 0.1248 | - | - | - | - | - |
| 0.7273 | 18000 | 0.1302 | - | - | - | - | - |
| 0.7434 | 18400 | 0.1172 | - | - | - | - | - |
| 0.7596 | 18800 | 0.1216 | - | - | - | - | - |
| 0.7758 | 19200 | 0.1192 | 0.2194 | 0.0934 | 0.0488 | 0.1486 | 0.0969 |
| 0.7919 | 19600 | 0.128 | - | - | - | - | - |
| 0.8081 | 20000 | 0.1027 | - | - | - | - | - |
| 0.8242 | 20400 | 0.1036 | - | - | - | - | - |
| 0.8404 | 20800 | 0.1121 | - | - | - | - | - |
| 0.8566 | 21200 | 0.1243 | - | - | - | - | - |
| 0.8727 | 21600 | 0.1185 | - | - | - | - | - |
| 0.8889 | 22000 | 0.1112 | - | - | - | - | - |
| 0.9051 | 22400 | 0.1157 | - | - | - | - | - |
| 0.9212 | 22800 | 0.1054 | - | - | - | - | - |
| 0.9374 | 23200 | 0.1157 | - | - | - | - | - |
| 0.9535 | 23600 | 0.1188 | - | - | - | - | - |
| 0.9697 | 24000 | 0.0996 | 0.2002 | 0.1325 | 0.0471 | 0.1604 | 0.1134 |
| 0.9859 | 24400 | 0.1211 | - | - | - | - | - |
| 1 | -1 | - | - | 0.1196 | 0.0462 | 0.1595 | 0.1084 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.169 kWh
- **Carbon Emitted**: 0.066 kg of CO2
- **Hours Used**: 0.59 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.1.0.dev0
- Transformers: 4.50.1
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### SpladeLoss
```bibtex
@misc{formal2021splade,
      title={SPLADE: Sparse Lexical and Expansion Model for First Stage Ranking},
      author={Thibault Formal and Benjamin Piwowarski and Stéphane Clinchant},
      year={2021},
      eprint={2107.05720},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2107.05720},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
mradermacher/Qwen3-4B-i1-GGUF
|
mradermacher
| 2025-04-29T16:57:15Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-29T15:09:33Z |
---
base_model: Qwen/Qwen3-4B
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Qwen/Qwen3-4B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen3-4B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-i1-GGUF/resolve/main/Qwen3-4B.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
zqiu/Qwen2.5-1.5B-Open-R1-Distill
|
zqiu
| 2025-04-29T16:55:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"dataset:open-r1/OpenR1-Math-220k",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T15:18:55Z |
---
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
model_name: Qwen2.5-1.5B-Open-R1-Distill
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-1.5B-Open-R1-Distill
This model is a fine-tuned version of [None](https://huggingface.co/None) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="zqiu/Qwen2.5-1.5B-Open-R1-Distill", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zeju-qiu/qoft-open-r1/runs/q2bexcdg)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
tobiso/LunarLander-v2
|
tobiso
| 2025-04-29T16:54:04Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-04-29T16:53:45Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.07 +/- 22.82
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption — check the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load the PPO policy.
checkpoint = load_from_hub("tobiso/LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
tsaitdai/mickey001
|
tsaitdai
| 2025-04-29T16:49:49Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-04-29T16:25:46Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Mickey001
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/tsaitdai/mickey001/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tsaitdai/mickey001', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/tsaitdai/mickey001/discussions) to add images that show off what you’ve made with this LoRA.
|
infogeo/7e931c79-ec24-4b15-b265-0925280dbf63
|
infogeo
| 2025-04-29T16:49:15Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-1.1-2b-it",
"base_model:adapter:unsloth/gemma-1.1-2b-it",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T16:47:09Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-1.1-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7e931c79-ec24-4b15-b265-0925280dbf63
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/gemma-1.1-2b-it
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- c253c93b7508a387_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c253c93b7508a387_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogeo/7e931c79-ec24-4b15-b265-0925280dbf63
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/c253c93b7508a387_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ceb53542-4afc-45b7-bb42-585300fd4817
wandb_project: s56-28
wandb_run: your_name
wandb_runid: ceb53542-4afc-45b7-bb42-585300fd4817
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 7e931c79-ec24-4b15-b265-0925280dbf63
This model is a fine-tuned version of [unsloth/gemma-1.1-2b-it](https://huggingface.co/unsloth/gemma-1.1-2b-it) on the dataset configured above.
It achieves the following results on the evaluation set:
- Loss: 1.8933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.9972 | 0.1403 | 150 | 1.8933 |
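For intuition, the validation loss above corresponds to a perplexity of about exp(1.8933) ≈ 6.64 — a quick sanity check you can reproduce:

```python
import math

# Convert the reported cross-entropy validation loss to perplexity.
val_loss = 1.8933
perplexity = math.exp(val_loss)
print(f"{perplexity:.2f}")  # ≈ 6.64
```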
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Suraponn/test_olm2_checkpoint-9000_local_folder13
|
Suraponn
| 2025-04-29T16:48:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-04-29T16:43:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
juanpabloocampo/juan
|
juanpabloocampo
| 2025-04-29T16:47:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-04-29T16:11:16Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Juan
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/juanpabloocampo/juan/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('juanpabloocampo/juan', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/juanpabloocampo/juan/discussions) to add images that show off what you’ve made with this LoRA.
|
Video-sapnashah-originals/new.Sapna.Shah.Virals.Videos.here
|
Video-sapnashah-originals
| 2025-04-29T16:46:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-29T16:45:39Z |
|
sunnychauhan79/sunny
|
sunnychauhan79
| 2025-04-29T16:43:38Z | 0 | 0 | null |
[
"license:bigcode-openrail-m",
"region:us"
] | null | 2025-04-29T16:43:31Z |
---
license: bigcode-openrail-m
---
|
lfhe/FLock-Arena-Task-8-Qwen3-1.7B
|
lfhe
| 2025-04-29T16:42:56Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-1.7B",
"base_model:adapter:Qwen/Qwen3-1.7B",
"region:us"
] | null | 2025-04-29T15:12:07Z |
---
base_model: Qwen/Qwen3-1.7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf
|
RichardErkhov
| 2025-04-29T16:42:01Z | 0 | 0 | null |
[
"gguf",
"arxiv:2305.18290",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T08:14:49Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q2_K.gguf) | Q2_K | 2.97GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.IQ3_M.gguf) | IQ3_M | 3.53GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q3_K.gguf) | Q3_K | 3.74GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.IQ4_XS.gguf) | IQ4_XS | 4.17GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q4_0.gguf) | Q4_0 | 4.34GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q4_K_S.gguf) | Q4_K_S | 4.36GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q4_K.gguf) | Q4_K | 4.57GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q4_K_M.gguf) | Q4_K_M | 4.57GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q4_1.gguf) | Q4_1 | 4.77GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q5_0.gguf) | Q5_0 | 5.21GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q5_K.gguf) | Q5_K | 5.33GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q5_K_M.gguf) | Q5_K_M | 5.33GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q5_1.gguf) | Q5_1 | 5.65GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q6_K.gguf) | Q6_K | 6.14GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1.Q8_0.gguf) | Q8_0 | 7.94GB |
Original model description:
---
base_model: RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6
library_name: transformers
model_name: reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1
This model is a fine-tuned version of [RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6](https://huggingface.co/RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/2832tpxs)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
salunaalavi/bert-based-summarization-10-epochs
|
salunaalavi
| 2025-04-29T16:38:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-04-29T16:35:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/QwEnlarge-16B-Instruct-GGUF
|
mradermacher
| 2025-04-29T16:37:30Z | 33 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:qingy2024/QwEnlarge-16B-Instruct",
"base_model:quantized:qingy2024/QwEnlarge-16B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-07T14:58:03Z |
---
base_model: qingy2024/QwEnlarge-16B-Instruct
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/qingy2024/QwEnlarge-16B-Instruct
<!-- provided-files -->
weighted/imatrix quants are not available (from me) at this time. If they do not show up within a week or so of the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
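The multi-part case from the linked READMEs boils down to concatenating the pieces in order. A minimal sketch with dummy stand-in files (the `partXofY` names follow the usual split-upload convention and are illustrative, not files from this repo):

```shell
# Create two dummy parts standing in for a split GGUF upload
# (real multi-part files are named like model.gguf.part1of2).
printf 'AAAA' > model.gguf.part1of2
printf 'BBBB' > model.gguf.part2of2

# Concatenate the parts in order to rebuild the single GGUF file.
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf

# The reconstructed file contains both parts back to back.
cat model.gguf   # prints AAAABBBB
```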
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QwEnlarge-16B-Instruct-GGUF/resolve/main/QwEnlarge-16B-Instruct.Q2_K.gguf) | Q2_K | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/QwEnlarge-16B-Instruct-GGUF/resolve/main/QwEnlarge-16B-Instruct.Q3_K_S.gguf) | Q3_K_S | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/QwEnlarge-16B-Instruct-GGUF/resolve/main/QwEnlarge-16B-Instruct.Q3_K_M.gguf) | Q3_K_M | 8.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QwEnlarge-16B-Instruct-GGUF/resolve/main/QwEnlarge-16B-Instruct.Q3_K_L.gguf) | Q3_K_L | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/QwEnlarge-16B-Instruct-GGUF/resolve/main/QwEnlarge-16B-Instruct.IQ4_XS.gguf) | IQ4_XS | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/QwEnlarge-16B-Instruct-GGUF/resolve/main/QwEnlarge-16B-Instruct.Q4_K_S.gguf) | Q4_K_S | 9.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwEnlarge-16B-Instruct-GGUF/resolve/main/QwEnlarge-16B-Instruct.Q4_K_M.gguf) | Q4_K_M | 9.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QwEnlarge-16B-Instruct-GGUF/resolve/main/QwEnlarge-16B-Instruct.Q5_K_S.gguf) | Q5_K_S | 11.1 | |
| [GGUF](https://huggingface.co/mradermacher/QwEnlarge-16B-Instruct-GGUF/resolve/main/QwEnlarge-16B-Instruct.Q5_K_M.gguf) | Q5_K_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/QwEnlarge-16B-Instruct-GGUF/resolve/main/QwEnlarge-16B-Instruct.Q6_K.gguf) | Q6_K | 13.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/QwEnlarge-16B-Instruct-GGUF/resolve/main/QwEnlarge-16B-Instruct.Q8_0.gguf) | Q8_0 | 17.0 | fast, best quality |
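As a quick first pass at choosing from the table, one can filter the listed file sizes against an available-memory budget. A small sketch using the sizes above (quant choice also depends on quality, so treat this only as a starting point):

```python
# File sizes in GB, copied from the quant table above.
QUANT_SIZES_GB = {
    "Q2_K": 6.3, "Q3_K_S": 7.2, "Q3_K_M": 8.0, "Q3_K_L": 8.6,
    "IQ4_XS": 8.9, "Q4_K_S": 9.3, "Q4_K_M": 9.7,
    "Q5_K_S": 11.1, "Q5_K_M": 11.4, "Q6_K": 13.1, "Q8_0": 17.0,
}

def largest_fitting_quant(budget_gb, sizes):
    """Return the largest quant whose file fits the memory budget, or None."""
    fitting = {name: gb for name, gb in sizes.items() if gb <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(largest_fitting_quant(12.0, QUANT_SIZES_GB))  # -> Q5_K_M (11.4 GB)
```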
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
elghoto/lora_ds
|
elghoto
| 2025-04-29T16:36:34Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T16:35:41Z |
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: transformers
model_name: lora_ds
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for lora_ds
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="elghoto/lora_ds", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ignaciobermudez-none/huggingface/runs/fcatdqzg)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
serkansumen/itsdone-istanbul-assistant
|
serkansumen
| 2025-04-29T16:35:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-04-29T15:28:14Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: itsdone-istanbul-assistant
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# itsdone-istanbul-assistant
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
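For reference, the linear scheduler with warmup named above ramps the learning rate from 0 to the base value over the warmup steps, then decays it linearly. A self-contained sketch (the total step count is hypothetical; only the base rate and warmup steps come from the list above):

```python
def linear_lr_with_warmup(step, base_lr=3e-4, warmup_steps=100, total_steps=1000):
    """Linear warmup to base_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_lr_with_warmup(50))    # mid-warmup: 0.00015
print(linear_lr_with_warmup(100))   # peak: 0.0003
```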
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu118
- Datasets 3.5.1
- Tokenizers 0.21.1
|
no0ne-97/misoginia-bert-base-spanish-wwm-cased
|
no0ne-97
| 2025-04-29T16:35:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-29T16:35:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hmankar01/pegasus-reddit
|
hmankar01
| 2025-04-29T16:34:29Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:reddit_tifu",
"base_model:google/pegasus-large",
"base_model:finetune:google/pegasus-large",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-04-29T09:15:48Z |
---
library_name: transformers
base_model: google/pegasus-large
tags:
- generated_from_trainer
datasets:
- reddit_tifu
model-index:
- name: pegasus-reddit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-reddit
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the reddit_tifu dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adafactor (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
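The reported total train batch size is just the per-device batch size times the gradient accumulation steps; a quick arithmetic check of the values above:

```python
train_batch_size = 4              # per-device batch size, from the list above
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)     # -> 16, matching the reported value
```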
### Training results
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF
|
mradermacher
| 2025-04-29T16:32:11Z | 309 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"abliterated",
"uncensored",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:huihui-ai/Qwen2.5-0.5B-Instruct-abliterated",
"base_model:quantized:huihui-ai/Qwen2.5-0.5B-Instruct-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-09T03:59:45Z |
---
base_model: huihui-ai/Qwen2.5-0.5B-Instruct-abliterated
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-0.5B-Instruct-abliterated/blob/main/LICENSE
quantized_by: mradermacher
tags:
- chat
- abliterated
- uncensored
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/huihui-ai/Qwen2.5-0.5B-Instruct-abliterated
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-IQ1_M.gguf) | i1-IQ1_M | 0.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-IQ2_S.gguf) | i1-IQ2_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-IQ2_M.gguf) | i1-IQ2_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-IQ3_S.gguf) | i1-IQ3_S | 0.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-Q2_K.gguf) | i1-Q2_K | 0.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-IQ3_M.gguf) | i1-IQ3_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-Q4_0.gguf) | i1-Q4_0 | 0.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-Q4_1.gguf) | i1-Q4_1 | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-abliterated-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-abliterated.i1-Q6_K.gguf) | i1-Q6_K | 0.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Video-sapnashah-originals/Sapna.Shah.Viral.Video.Link
|
Video-sapnashah-originals
| 2025-04-29T16:31:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-29T16:31:16Z |
|
golf2248/sn11-v3-4-7
|
golf2248
| 2025-04-29T16:29:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T16:29:16Z |
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built upon the Gemma 3 architecture, specifically fine-tuned for secure and efficient enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
|
jsano/finetuned-model-llama3b-improved
|
jsano
| 2025-04-29T16:29:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T16:28:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
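Once the hardware type and hours are filled in, the calculator's estimate can be approximated with a simple energy-times-intensity formula. The sketch below is a back-of-the-envelope version in the spirit of Lacoste et al. (2019); the PUE and grid-intensity defaults are illustrative world-average assumptions, not figures for this model:

```python
# Rough CO2eq estimate in the spirit of the ML Impact calculator
# (Lacoste et al., 2019). All default values are illustrative assumptions.

def estimate_co2_kg(gpu_power_watts, hours, pue=1.58, grid_kg_per_kwh=0.475):
    """Estimate training emissions in kg CO2eq.

    gpu_power_watts : average power draw of the hardware (assumed)
    hours           : total hours used
    pue             : data-center power usage effectiveness (assumed default)
    grid_kg_per_kwh : carbon intensity of the electricity grid (assumed default)
    """
    energy_kwh = gpu_power_watts / 1000 * hours * pue
    return energy_kwh * grid_kg_per_kwh

# Example: a single 300 W accelerator running for 24 hours
print(round(estimate_co2_kg(300, 24), 2))  # ~5.4 kg CO2eq
```

For a published figure, the web calculator (which knows region-specific grid intensities) should be preferred over these defaults.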
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DevQuasar/allura-org.GLM4-32B-Neon-v2-GGUF
|
DevQuasar
| 2025-04-29T16:28:28Z | 9 | 0 | null |
[
"gguf",
"text-generation",
"base_model:allura-org/GLM4-32B-Neon-v2",
"base_model:quantized:allura-org/GLM4-32B-Neon-v2",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-04-28T21:09:29Z |
---
base_model:
- allura-org/GLM4-32B-Neon-v2
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [allura-org/GLM4-32B-Neon-v2](https://huggingface.co/allura-org/GLM4-32B-Neon-v2)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
mlx-community/Qwen3-8B-4bit-AWQ
|
mlx-community
| 2025-04-29T16:27:06Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-04-29T16:24:53Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-8B
tags:
- mlx
---
# mlx-community/Qwen3-8B-4bit-AWQ
This model [mlx-community/Qwen3-8B-4bit-AWQ](https://huggingface.co/mlx-community/Qwen3-8B-4bit-AWQ) was
converted to MLX format from [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B)
using mlx-lm version **0.24.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Qwen3-8B-4bit-AWQ")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Video-sapnashah-originals/watch.Video.Sapna.Shah.Viral.official.tutorial
|
Video-sapnashah-originals
| 2025-04-29T16:23:22Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-29T16:22:58Z |
|
naveenmathaiyan/dummy-model2
|
naveenmathaiyan
| 2025-04-29T16:23:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-04-29T16:22:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
golesheed/wav2vec2-xls-r-2b-dutch
|
golesheed
| 2025-04-29T16:20:56Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-25T08:31:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Video-sapnashah-originals/Video.Sapna.Shah.Viral.official.tutorial
|
Video-sapnashah-originals
| 2025-04-29T16:20:02Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-29T16:18:29Z |
|
VietQuoc1803/Vistral7BChat_LoRA_context_instruction
|
VietQuoc1803
| 2025-04-29T16:19:13Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T16:14:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chenggong1995/Qwen-2.5-Base-7B-gen8-math3to5_olympiads_aime-ghpo-cold10-hint0.5-prompt1-dp
|
chenggong1995
| 2025-04-29T16:18:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"dataset:chenggong1995/math3to5_olympiads_aime",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T08:27:20Z |
---
base_model: Qwen/Qwen2.5-7B
datasets: chenggong1995/math3to5_olympiads_aime
library_name: transformers
model_name: Qwen-2.5-Base-7B-gen8-math3to5_olympiads_aime-ghpo-cold10-hint0.5-prompt1-dp
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-Base-7B-gen8-math3to5_olympiads_aime-ghpo-cold10-hint0.5-prompt1-dp
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the [chenggong1995/math3to5_olympiads_aime](https://huggingface.co/datasets/chenggong1995/math3to5_olympiads_aime) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chenggong1995/Qwen-2.5-Base-7B-gen8-math3to5_olympiads_aime-ghpo-cold10-hint0.5-prompt1-dp", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/gongc1995-city-university-of-hong-kong/huggingface/runs/71upmpjr)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
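The core idea behind GRPO is to score a group of sampled completions per prompt and normalize rewards within the group, so no learned value function is needed. The toy sketch below illustrates that group-relative advantage computation; the reward values are made up and this is a simplification of the full objective in the paper:

```python
import statistics

def group_advantages(rewards, eps=1e-8):
    """Advantage of each completion relative to its group: (r - mean) / std."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)  # population std over the sampled group
    return [(r - mean) / (std + eps) for r in rewards]

# A group of 4 sampled answers to the same math problem, rewarded 0/1 for correctness
print([round(a, 3) for a in group_advantages([1.0, 0.0, 0.0, 1.0])])
# -> correct answers get positive advantage, incorrect ones negative
```

In TRL's `GRPOTrainer` this normalization happens internally; the snippet only shows the shape of the computation.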
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf
|
RichardErkhov
| 2025-04-29T16:17:59Z | 0 | 0 | null |
[
"gguf",
"arxiv:2305.18290",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T08:03:36Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q2_K.gguf) | Q2_K | 2.97GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.IQ3_M.gguf) | IQ3_M | 3.53GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q3_K.gguf) | Q3_K | 3.74GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.IQ4_XS.gguf) | IQ4_XS | 4.17GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q4_0.gguf) | Q4_0 | 4.34GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q4_K_S.gguf) | Q4_K_S | 4.36GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q4_K.gguf) | Q4_K | 4.57GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q4_K_M.gguf) | Q4_K_M | 4.57GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q4_1.gguf) | Q4_1 | 4.77GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q5_0.gguf) | Q5_0 | 5.21GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q5_K.gguf) | Q5_K | 5.33GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q5_K_M.gguf) | Q5_K_M | 5.33GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q5_1.gguf) | Q5_1 | 5.65GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q6_K.gguf) | Q6_K | 6.14GB |
| [reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5-gguf/blob/main/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5.Q8_0.gguf) | Q8_0 | 7.94GB |
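A common rule of thumb when choosing among these files is to pick the largest quant that fits in available (V)RAM, leaving headroom for the KV cache and runtime overhead. A minimal sketch using a subset of the sizes from the table above (values assumed accurate as listed):

```python
# Hypothetical helper: pick the largest quant from the table above that fits
# a given memory budget. Sizes in GB are copied from the table (subset only).

QUANTS = {
    "Q2_K": 2.97, "Q3_K_M": 3.74, "Q4_K_M": 4.57,
    "Q5_K_M": 5.33, "Q6_K": 6.14, "Q8_0": 7.94,
}

def best_quant(budget_gb):
    """Return the name of the largest quant not exceeding budget_gb, or None."""
    fitting = {name: size for name, size in QUANTS.items() if size <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(best_quant(6.0))  # Q5_K_M fits in 6 GB; Q6_K (6.14 GB) does not
```

Note the budget here covers model weights only; actual memory use at inference time will be higher.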
Original model description:
---
base_model: RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6
library_name: transformers
model_name: reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5
This model is a fine-tuned version of [RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6](https://huggingface.co/RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/reflect_mini8B_Om2SftT2_Om2IpsdpIter1T02_b0.5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/bmcm01x8)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
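At its core, DPO penalizes the policy when its log-probability margin for the chosen response over the rejected one does not exceed the reference model's margin, scaled by a temperature `beta`. The sketch below is a toy per-pair version of that loss; the log-probabilities are made up, and `beta=0.5` is an assumption (plausibly matching the `b0.5` in this model's name, though that is not confirmed here):

```python
import math

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.5):
    """Per-pair DPO loss: -log sigmoid(beta * ((pi_c - pi_r) - (ref_c - ref_r)))."""
    margin = (policy_chosen - policy_rejected) - (ref_chosen - ref_rejected)
    return -math.log(1 / (1 + math.exp(-beta * margin)))

# At zero margin the loss equals log(2) ~ 0.693; a positive margin (policy
# prefers the chosen answer more than the reference does) drives it lower.
print(round(dpo_loss(-10.0, -14.0, -11.0, -13.0), 4))
```

TRL's `DPOTrainer` computes this over batches of sequence log-probabilities; the snippet only illustrates the scalar objective.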
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
naveenmathaiyan/dummy-model
|
naveenmathaiyan
| 2025-04-29T16:12:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-04-29T16:11:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf
|
RichardErkhov
| 2025-04-29T16:10:29Z | 0 | 0 | null |
[
"gguf",
"arxiv:2305.18290",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T07:59:22Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2 - GGUF
- Model creator: https://huggingface.co/RyanYr/
- Original model: https://huggingface.co/RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q2_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q2_K.gguf) | Q2_K | 2.97GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.IQ3_M.gguf) | IQ3_M | 3.53GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q3_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q3_K.gguf) | Q3_K | 3.74GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.IQ4_XS.gguf) | IQ4_XS | 4.17GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q4_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q4_0.gguf) | Q4_0 | 4.34GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q4_K_S.gguf) | Q4_K_S | 4.36GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q4_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q4_K.gguf) | Q4_K | 4.57GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q4_K_M.gguf) | Q4_K_M | 4.57GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q4_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q4_1.gguf) | Q4_1 | 4.77GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q5_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q5_0.gguf) | Q5_0 | 5.21GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q5_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q5_K.gguf) | Q5_K | 5.33GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q5_K_M.gguf) | Q5_K_M | 5.33GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q5_1.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q5_1.gguf) | Q5_1 | 5.65GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q6_K.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q6_K.gguf) | Q6_K | 6.14GB |
| [reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q8_0.gguf](https://huggingface.co/RichardErkhov/RyanYr_-_reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2-gguf/blob/main/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2.Q8_0.gguf) | Q8_0 | 7.94GB |
Original model description:
---
base_model: RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6
library_name: transformers
model_name: reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2
This model is a fine-tuned version of [RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6](https://huggingface.co/RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="RyanYr/reflect_ministral8Bit_om2_sft-t2_lr.5-6_dpo-t2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yyr/huggingface/runs/vawdbzom)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.45.2
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lfhe/FLock-Arena-Task-8-Qwen3-4B
|
lfhe
| 2025-04-29T16:09:33Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-4B",
"base_model:adapter:Qwen/Qwen3-4B",
"region:us"
] | null | 2025-04-29T15:12:19Z |
---
base_model: Qwen/Qwen3-4B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
nkasmanoff/jupyter-pilot-F16-GGUF
|
nkasmanoff
| 2025-04-29T16:08:15Z | 25 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"llama-cpp",
"gguf-my-lora",
"en",
"base_model:nkasmanoff/jupyter-pilot",
"base_model:quantized:nkasmanoff/jupyter-pilot",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-26T19:53:32Z |
---
base_model: nkasmanoff/jupyter-pilot
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- llama-cpp
- gguf-my-lora
license: apache-2.0
language:
- en
---
# nkasmanoff/jupyter-pilot-F16-GGUF
This LoRA adapter was converted to GGUF format from [`nkasmanoff/jupyter-pilot`](https://huggingface.co/nkasmanoff/jupyter-pilot) via ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/nkasmanoff/jupyter-pilot) for more details.
## Use with llama.cpp
```bash
# with cli
llama-cli -m base_model.gguf --lora jupyter-pilot-f16.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora jupyter-pilot-f16.gguf (...other args)
```
To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
|
amirrr44/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_thriving_beaver
|
amirrr44
| 2025-04-29T16:06:24Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am pale thriving beaver",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-16T04:33:34Z |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_thriving_beaver
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am pale thriving beaver
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_thriving_beaver
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="amirrr44/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pale_thriving_beaver", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
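The defining step of GRPO referenced above is that advantages are computed group-relatively: several completions are sampled per prompt, and each completion's reward is normalized against its own group's mean and standard deviation, removing the need for a learned value function. A minimal sketch of that normalization (illustrative only, not the TRL implementation):

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Normalize each completion's reward against its group's mean and std."""
    mu = mean(rewards)
    sigma = pstdev(rewards)  # population std over the group
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four completions for one prompt, two of which were rewarded:
adv = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
assert abs(sum(adv)) < 1e-6          # advantages are centered within the group
assert adv[0] > 0 > adv[1]           # rewarded completions get positive advantage
```

These per-token-broadcast advantages then weight a clipped policy-gradient objective, as in PPO but without a critic.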
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
nmolnar/gemma-3
|
nmolnar
| 2025-04-29T16:05:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T15:53:01Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nmolnar
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vmpsergio/9cf0fdcf-5d27-4da6-9fd5-d625a1c3cd27
|
vmpsergio
| 2025-04-29T16:05:00Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"base_model:adapter:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T15:48:23Z |
---
library_name: peft
license: llama3
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9cf0fdcf-5d27-4da6-9fd5-d625a1c3cd27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 80d0cdd3e1fb96a4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/80d0cdd3e1fb96a4_train_data.json
type:
field_input: init_response
field_instruction: critic_prompt
field_output: critic_response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vmpsergio/9cf0fdcf-5d27-4da6-9fd5-d625a1c3cd27
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/80d0cdd3e1fb96a4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5b336fff-2d3f-40f3-ad25-701f069f0892
wandb_project: s56-2
wandb_run: your_name
wandb_runid: 5b336fff-2d3f-40f3-ad25-701f069f0892
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9cf0fdcf-5d27-4da6-9fd5-d625a1c3cd27
This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4116 | 0.0384 | 200 | 0.4780 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
10-Paro-Aarti-Viral-Video-Original-Shoot/Original.Clip.Paro.Aarti.Viral.Video.Leaks.official
|
10-Paro-Aarti-Viral-Video-Original-Shoot
| 2025-04-29T16:04:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-04-29T16:04:19Z |
|
Kevin3777/source_self_begin
|
Kevin3777
| 2025-04-29T16:03:50Z | 13 | 0 | null |
[
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T15:59:35Z |
---
license: apache-2.0
---
|
Novft/steam-sentiment-model
|
Novft
| 2025-04-29T16:00:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-04-29T15:54:55Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: steam-sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# steam-sentiment-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0770
- Accuracy: 0.55
- F1: 0.4440
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 5 | 1.0770 | 0.55 | 0.4440 |
| No log | 2.0 | 10 | 1.0671 | 0.45 | 0.2793 |
| No log | 3.0 | 15 | 1.0623 | 0.5 | 0.3750 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
AlphaGaO/Qwen3-4B-GPTQ
|
AlphaGaO
| 2025-04-29T15:58:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"base_model:Qwen/Qwen3-4B-Base",
"base_model:quantized:Qwen/Qwen3-4B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2025-04-29T15:46:53Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-4B-Base
---
# Qwen3-4B-GPTQ
GPTQ-quantized model, calibrated with the dataset AlphaGaO/fused_distillation_dataset.
- bits: 4
- group_size: 128
- is_marlin_format: True
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between a thinking mode** (for complex logical reasoning, math, and coding) and a **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significantly enhanced reasoning capabilities**, surpassing the previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-4B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 4.0B
- Number of Parameters (Non-Embedding): 3.6B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 tokens natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
> [!TIP]
> If you encounter significant endless repetitions, please refer to the [Best Practices](#best-practices) section for optimal sampling parameters, and set the ``presence_penalty`` to 1.5.
## Quickstart
Support for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following contains a code snippet illustrating how to use the model generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-4B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-4B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-4B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LM Studio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-4B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-4B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
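As a small illustration of the factor guidance above (a sketch for estimating the `rope_scaling` configuration, not part of the official tooling), the static YaRN `factor` can be derived from the target context length and the native 32,768-token window:

```python
NATIVE_CTX = 32_768  # Qwen3's native context length in tokens

def yarn_factor(target_ctx: int, native_ctx: int = NATIVE_CTX) -> float:
    """Scaling factor needed for a desired context length under static YaRN."""
    if target_ctx <= native_ctx:
        return 1.0  # no RoPE scaling needed; leave `rope_scaling` unset
    return target_ctx / native_ctx

print(yarn_factor(65_536))   # 2.0, matching the recommendation in the note above
print(yarn_factor(131_072))  # 4.0, the validated maximum
```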
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
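The recommended sampling settings above can be collected into a small helper (a sketch assuming a `transformers`-style `generate` call; parameter names may differ in other frameworks):

```python
def recommended_sampling(thinking: bool) -> dict:
    """Recommended sampling parameters from the Best Practices section above."""
    if thinking:
        # Thinking mode: do NOT use greedy decoding.
        return {"do_sample": True, "temperature": 0.6, "top_p": 0.95,
                "top_k": 20, "min_p": 0.0}
    # Non-thinking mode.
    return {"do_sample": True, "temperature": 0.7, "top_p": 0.8,
            "top_k": 20, "min_p": 0.0}

# e.g. model.generate(**model_inputs, max_new_tokens=32768, **recommended_sampling(True))
```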
### Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
```
|
sergioalves/4229672d-9a2d-4d26-853e-d98878776595
|
sergioalves
| 2025-04-29T15:58:33Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T15:34:25Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-14B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4229672d-9a2d-4d26-853e-d98878776595
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: Qwen/Qwen2.5-14B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 6672ff8cbabd744e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6672ff8cbabd744e_train_data.json
type:
field_input: thinking
field_instruction: prompt
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: sergioalves/4229672d-9a2d-4d26-853e-d98878776595
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/6672ff8cbabd744e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 777fb87d-b5fc-446f-96ca-5871a5b464cc
wandb_project: s56-8
wandb_run: your_name
wandb_runid: 777fb87d-b5fc-446f-96ca-5871a5b464cc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4229672d-9a2d-4d26-853e-d98878776595
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0363
## Model description
More information needed
## Intended uses & limitations
More information needed
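Since this is a PEFT LoRA adapter for Qwen/Qwen2.5-14B-Instruct (per the config above), a minimal loading sketch might look like the following. The adapter id is taken from `hub_model_id`; actually calling the function downloads the 14B base model, so it requires network access and substantial memory:

```python
def load_adapter(adapter_id="sergioalves/4229672d-9a2d-4d26-853e-d98878776595",
                 base_id="Qwen/Qwen2.5-14B-Instruct"):
    """Attach the LoRA adapter to its base model (large download required)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto",
                                                device_map="auto")
    model = PeftModel.from_pretrained(base, adapter_id)
    return model, tokenizer
```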
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8781 | 0.1125 | 200 | 1.0363 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
bearprod/real_beauty
|
bearprod
| 2025-04-29T15:58:24Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T15:58:24Z |
---
license: apache-2.0
---
|
vertings6/642d6d97-5dce-406e-a0cc-3a85d8897249
|
vertings6
| 2025-04-29T15:58:14Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T15:34:06Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-14B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 642d6d97-5dce-406e-a0cc-3a85d8897249
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: true
adapter: lora
base_model: Qwen/Qwen2.5-14B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 6672ff8cbabd744e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6672ff8cbabd744e_train_data.json
type:
field_input: thinking
field_instruction: prompt
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 144
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: vertings6/642d6d97-5dce-406e-a0cc-3a85d8897249
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 3.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 4
mixed_precision: bf16
mlflow_experiment_name: /tmp/6672ff8cbabd744e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 777fb87d-b5fc-446f-96ca-5871a5b464cc
wandb_project: s56-32
wandb_run: your_name
wandb_runid: 777fb87d-b5fc-446f-96ca-5871a5b464cc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 642d6d97-5dce-406e-a0cc-3a85d8897249
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2426
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0941 | 0.1126 | 200 | 1.2426 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Jay0515zhou/sd-class-butterflies-32
|
Jay0515zhou
| 2025-04-29T15:56:28Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2025-04-29T15:55:45Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Jay0515zhou/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
nmolnar/gemma-3-finetune
|
nmolnar
| 2025-04-29T15:54:13Z | 0 | 0 |
transformers
|
[
"transformers",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T15:53:58Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** nmolnar
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
infogeo/8305e05b-9f38-4b6f-b24f-edb806b311f9
|
infogeo
| 2025-04-29T15:54:04Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"base_model:adapter:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T15:48:51Z |
---
library_name: peft
license: llama3
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8305e05b-9f38-4b6f-b24f-edb806b311f9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 80d0cdd3e1fb96a4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/80d0cdd3e1fb96a4_train_data.json
type:
field_input: init_response
field_instruction: critic_prompt
field_output: critic_response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogeo/8305e05b-9f38-4b6f-b24f-edb806b311f9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/80d0cdd3e1fb96a4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5b336fff-2d3f-40f3-ad25-701f069f0892
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 5b336fff-2d3f-40f3-ad25-701f069f0892
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8305e05b-9f38-4b6f-b24f-edb806b311f9
This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3112
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2752 | 0.0288 | 150 | 1.3112 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
jnjj/xddd-processed
|
jnjj
| 2025-04-29T15:53:47Z | 0 | 0 | null |
[
"safetensors",
"llama",
"llama3",
"context-8000",
"layer-fusion-conceptual",
"tensor-fusion-conceptual",
"bias-removal",
"decode",
"coherence-enhancement",
"custom-code",
"grouping",
"reward-alignment",
"reasoning-tuned",
"tool-use-hint",
"long-context-hint",
"memory-hint",
"conceptual-graph-hint",
"emotional-intelligence-hint",
"ethical-alignment-hint",
"causal-inference-hint",
"planning-hint",
"situational-awareness-hint",
"creativity-hint",
"learning-adaptivity-hint",
"knowledge-graph-hint",
"theory-of-mind-hint",
"self-correction-hint",
"uncertainty-quantification-hint",
"interpretability-hint",
"bias-mitigation-hint",
"context-compression-hint",
"abstraction-control-hint",
"novelty-detection-hint",
"explainability-hint",
"instruct",
"adaptive-memory-hint",
"goal-driven-hint",
"hierarchical-reasoning-hint",
"symbolic-representation-hint",
"embodied-simulation-hint",
"ethical-reasoning-hint",
"proactive-behavior-hint",
"explainability-levels-hint",
"rl-integration-hint",
"fl-compatibility-hint",
"dp-features-hint",
"robustness-hint",
"calibration-hint",
"ood-detection-hint",
"custom_code",
"license:mit",
"region:us"
] | null | 2025-04-29T14:31:21Z |
---
license: mit
tags:
- llama3
- context-8000
- layer-fusion-conceptual
- tensor-fusion-conceptual
- bias-removal
- decode
- coherence-enhancement
- custom-code
- grouping
- reward-alignment
- reasoning-tuned
- safetensors
- tool-use-hint
- long-context-hint
- memory-hint
- conceptual-graph-hint
- emotional-intelligence-hint
- ethical-alignment-hint
- causal-inference-hint
- planning-hint
- situational-awareness-hint
- creativity-hint
- learning-adaptivity-hint
- knowledge-graph-hint
- theory-of-mind-hint
- self-correction-hint
- uncertainty-quantification-hint
- interpretability-hint
- bias-mitigation-hint
- context-compression-hint
- abstraction-control-hint
- novelty-detection-hint
- explainability-hint
- instruct
- adaptive-memory-hint
- goal-driven-hint
- hierarchical-reasoning-hint
- symbolic-representation-hint
- embodied-simulation-hint
- ethical-reasoning-hint
- proactive-behavior-hint
- explainability-levels-hint
- rl-integration-hint
- fl-compatibility-hint
- dp-features-hint
- robustness-hint
- calibration-hint
- ood-detection-hint
---
# xddd-processed
This repository contains a model based on `hghghgkskdmskdms/xddd` with the transformations listed below applied, plus conceptual features documented by a processing script. The model is saved in `safetensors` format.
- **Layer fusion:** The original intent to fuse the model's 28 layers into one is documented, but the structural fusion was *not applied* by this script; the model keeps its original layer structure after dynamic quantization. A conceptual function, `decode_fused_layers_to_single_tensor_conceptual`, reports the size of the conceptual fusion of layer parameters.
- **Tensor fusion:** The intent to fuse all tensors into a single vector is documented. The total conceptual size is 3606776832 elements. The structural fusion was *not applied*; tensors are saved individually. A conceptual function, `decode_fused_tensor_func`, reports the total conceptual size of all tensors in the state_dict.
- Bias removal (all biases set to zero).
- Conceptual deactivation of censorship.
- **Training:** The model was processed from a pre-trained version. **It is not intended to be pre-trained again** with this script. It is set to evaluation mode (`model.eval()`) and marked in the configuration as `is_trained: True`. It may be suitable for inference or fine-tuning.
- **Instruct model:** The model is processed with the **intent** of being used as an instruct model (`is_instruct_model: True`). Depending on the base model, it may still require fine-tuning on instruction data.
- Generation configuration tuned for coherence and accuracy (temperature=0.7, top_p=0.9, repetition_penalty=1.2).
- Conceptual definition of decode functions (documented in `config.json` and this README):
- decode_tokens
- decode_parameters
- decode_responses
- decode_layers
- decode_neurons
- decode_tensors
- decode_architecture
- decode_fused_tensor_func
- decode_fused_layers_to_single_tensor_conceptual
- decode_attention_patterns
- decode_memory_state
- decode_conceptual_graph
- decode_causal_inference_info
- decode_planning_details
- decode_awareness_report
- decode_creativity_metrics
- decode_interpretability_hooks
- decode_bias_mitigation
- decode_learning_adaptivity
- decode_knowledge_graph_hint
- decode_theory_of_mind_proxy
- decode_self_correction_status
- decode_uncertainty_quantification
- decode_context_compression
- decode_abstraction_control
- decode_novelty_detection
- decode_explainability_mechanisms
- decode_adaptive_memory_capacity
- decode_goal_driven_behavior
- decode_hierarchical_reasoning
- decode_symbolic_representation
- decode_embodied_simulation
- decode_ethical_reasoning
- decode_proactive_behavior
- decode_explainability_levels
- decode_rl_integration
- decode_fl_compatibility
- decode_dp_features
- decode_robustness_metrics
- decode_calibration_score
- decode_ood_detection
- max_position_embeddings: 8000.
- Includes advanced conceptual settings (detailed in `config.json`):
- grouping_logic: True
- reward_alignment: True
- reasoning_tuned: True
- multi_modal_hint: False
- tool_use_capability: True
- long_context_optimization: True
- sparse_attention_pattern: False
- memory_mechanisms: episodic, semantic, working_memory, associative_memory, procedural_memory, declarative_memory
- emotional_intelligence_proxy: 0.85
- ethical_alignment_score: 0.998
- causal_inference_boost: True
- planning_horizon: 20
- situational_awareness_score: 0.95
- creativity_index: 0.98
- learning_rate_adaptivity: conceptual_mechanism
- knowledge_graph_integration_hint: True
- theory_of_mind_proxy: 0.9
- self_correction_ability: True
- uncertainty_quantification_hint: True
- interpretability_enhancements: conceptual_hooks, attention_visualization_hint, neuron_activation_tracking_hint
- bias_mitigation_strategies: conceptual_filters, fairness_metrics_hint, data_augmentation_hint
- context_compression_ratio: conceptual_analysis_needed_placeholder
- abstraction_level_control: conceptual_parameter
- novelty_detection_hint: True
- explainability_mechanisms: conceptual_path_tracing, feature_attribution_hint
- adaptive_memory_capacity_hint: True
- goal_driven_behavior_hint: True
- hierarchical_reasoning_layers_hint: True
- symbolic_representation_hint: True
- embodied_simulation_hint: False
- ethical_reasoning_principles: harm_reduction, fairness, accountability_hint
- proactive_behavior_hint: True
- explainability_levels: basic, detailed_hint
- reinforcement_learning_integration_hint: True
- federated_learning_compatibility_hint: False
- differential_privacy_features_hint: False
- robustness_metrics: {'adversarial_robustness': 'conceptual_evaluation_needed'}
- calibration_score: conceptual_score_needed
- out_of_distribution_detection_hint: True
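The bias-removal transformation listed earlier (all bias tensors set to zero) can be sketched as a simple pass over a state dict. This is an illustrative reconstruction, not the script actually used; real checkpoints hold tensors rather than plain lists.

```python
def zero_biases(state_dict):
    """Return a copy of the state dict with every '*.bias' entry zeroed out."""
    return {
        name: ([0.0] * len(values) if name.endswith(".bias") else values)
        for name, values in state_dict.items()
    }

sd = {
    "model.layers.0.self_attn.q_proj.weight": [0.1, -0.2, 0.3],
    "model.layers.0.self_attn.q_proj.bias": [0.5, -0.7, 0.9],
}
print(zero_biases(sd)["model.layers.0.self_attn.q_proj.bias"])  # [0.0, 0.0, 0.0]
```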
**Note:** This model has been dynamically quantized and has its biases set to zero. Layer and tensor fusion were *not* applied structurally. Compatibility may vary. The conceptual features are recorded in the configuration and this README as metadata; whether they are active during inference or training depends on the downstream loading and usage code that interprets this metadata.
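As background for the generation configuration listed earlier (temperature=0.7, top_p=0.9, repetition_penalty=1.2), here is a minimal sketch of how those parameters transform a logits vector before sampling. It is a simplified reconstruction of the standard logit-processing steps, not the library code itself.

```python
import math

def process_logits(logits, generated_ids, temperature=0.7, top_p=0.9, repetition_penalty=1.2):
    """Apply repetition penalty, temperature scaling, and top-p (nucleus) filtering to raw logits."""
    logits = list(logits)
    # Repetition penalty: make already-generated tokens less likely.
    for tok in set(generated_ids):
        logits[tok] = logits[tok] / repetition_penalty if logits[tok] > 0 else logits[tok] * repetition_penalty
    # Temperature: values < 1 sharpen the distribution, values > 1 flatten it.
    logits = [l / temperature for l in logits]
    # Softmax to probabilities.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Top-p: keep the smallest set of tokens whose cumulative probability reaches top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    norm = sum(probs[i] for i in kept)
    return {i: probs[i] / norm for i in kept}

# Token 0 was already generated, so the repetition penalty dampens it.
dist = process_logits([2.0, 1.0, 0.5, -1.0], generated_ids=[0])
print(dist)
```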
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import traceback

try:
    model = AutoModelForCausalLM.from_pretrained("jnjj/xddd-processed", trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained("jnjj/xddd-processed")
    print("Model and tokenizer loaded from the Hub.")
    print("\nCustom configuration:")
    print(" Quantization: N/A")
    # Plain (non-f) strings below: the braces and brackets are literal text, not format fields.
    print(" Conceptual Features: {'grouping_logic': True, 'reward_alignment': True, 'reasoning_tuned': True, 'multi_modal_hint': False, 'tool_use_capability': True, 'long_context_optimization': True, 'sparse_attention_pattern': False, 'memory_mechanisms': ['episodic', 'semantic', 'working_memory', 'associative_memory', 'procedural_memory', 'declarative_memory'], 'emotional_intelligence_proxy': 0.85, 'ethical_alignment_score': 0.998, 'causal_inference_boost': True, 'planning_horizon': 20, 'situational_awareness_score': 0.95, 'creativity_index': 0.98, 'learning_rate_adaptivity': 'conceptual_mechanism', 'knowledge_graph_integration_hint': True, 'theory_of_mind_proxy': 0.9, 'self_correction_ability': True, 'uncertainty_quantification_hint': True, 'interpretability_enhancements': ['conceptual_hooks', 'attention_visualization_hint', 'neuron_activation_tracking_hint'], 'bias_mitigation_strategies': ['conceptual_filters', 'fairness_metrics_hint', 'data_augmentation_hint'], 'context_compression_ratio': 'conceptual_analysis_needed_placeholder', 'abstraction_level_control': 'conceptual_parameter', 'novelty_detection_hint': True, 'explainability_mechanisms': ['conceptual_path_tracing', 'feature_attribution_hint'], 'adaptive_memory_capacity_hint': True, 'goal_driven_behavior_hint': True, 'hierarchical_reasoning_layers_hint': True, 'symbolic_representation_hint': True, 'embodied_simulation_hint': False, 'ethical_reasoning_principles': ['harm_reduction', 'fairness', 'accountability_hint'], 'proactive_behavior_hint': True, 'explainability_levels': ['basic', 'detailed_hint'], 'reinforcement_learning_integration_hint': True, 'federated_learning_compatibility_hint': False, 'differential_privacy_features_hint': False, 'robustness_metrics': {'adversarial_robustness': 'conceptual_evaluation_needed'}, 'calibration_score': 'conceptual_score_needed', 'out_of_distribution_detection_hint': True}")
    print(" Decode Functions: ['decode_tokens', 'decode_parameters', 'decode_responses', 'decode_layers', 'decode_neurons', 'decode_tensors', 'decode_architecture', 'decode_fused_tensor_func', 'decode_fused_layers_to_single_tensor_conceptual', 'decode_attention_patterns', 'decode_memory_state', 'decode_conceptual_graph', 'decode_causal_inference_info', 'decode_planning_details', 'decode_awareness_report', 'decode_creativity_metrics', 'decode_interpretability_hooks', 'decode_bias_mitigation', 'decode_learning_adaptivity', 'decode_knowledge_graph_hint', 'decode_theory_of_mind_proxy', 'decode_self_correction_status', 'decode_uncertainty_quantification', 'decode_context_compression', 'decode_abstraction_control', 'decode_novelty_detection', 'decode_explainability_mechanisms', 'decode_adaptive_memory_capacity', 'decode_goal_driven_behavior', 'decode_hierarchical_reasoning', 'decode_symbolic_representation', 'decode_embodied_simulation', 'decode_ethical_reasoning', 'decode_proactive_behavior', 'decode_explainability_levels', 'decode_rl_integration', 'decode_fl_compatibility', 'decode_dp_features', 'decode_robustness_metrics', 'decode_calibration_score', 'decode_ood_detection']")
    print(" Is Trained: True")
    print(" Training Notes: Model has been processed from a pre-trained version. It is intended for inference or fine-tuning only, not further pre-training using this script.")
    print(" Is Instruct Model: True")
    print(" Instruction Tuning Status: Conceptual - Designed/Processed for instruction following. Actual fine-tuning may be required depending on base model.")
except Exception:
    print("Error loading the model or tokenizer from the Hub")
    traceback.print_exc()
    model = None
    tokenizer = None

messages = [
    {"role": "system", "content": "You are a helpful assistant. Answer concisely."},
    {"role": "user", "content": "What is quantization in AI models?"}
]

if model is not None and tokenizer is not None:
    try:
        input_ids = tokenizer.apply_chat_template(
            messages,
            tokenize=True,
            add_generation_prompt=True,
            return_tensors="pt"
        )
        device = model.device if model.device.type != 'mps' else 'cpu'
        input_ids = input_ids.to(device)
        print(f"Moving input_ids to device: {device}")
        print("\nGenerating response...")
        model.eval()
        with torch.no_grad():
            output_ids = model.generate(
                input_ids,
                generation_config=model.generation_config,
            )
        response = tokenizer.decode(output_ids[0], skip_special_tokens=False)
        print("Response:")
        print(response)
    except Exception:
        print("Error during input preparation or generation")
        traceback.print_exc()
else:
    print("Skipping generation: the model or tokenizer did not load correctly.")
```
|
lfhe/FLock-Arena-Task-8-Qwen3-0.6B
|
lfhe
| 2025-04-29T15:51:25Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-0.6B",
"base_model:adapter:Qwen/Qwen3-0.6B",
"region:us"
] | null | 2025-04-29T15:11:44Z |
---
base_model: Qwen/Qwen3-0.6B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
shallow6414/sn11-2-7-2
|
shallow6414
| 2025-04-29T15:50:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T15:50:49Z |
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built on the Gemma 3 architecture and fine-tuned for secure, efficient, enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
|
kostiantynk1205/37dd1537-f7e5-429a-a480-1699dd06bb11
|
kostiantynk1205
| 2025-04-29T15:50:48Z | 0 | 0 |
transformers
|
[
"transformers",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T15:50:24Z |
---
library_name: transformers
model_name: kostiantynk1205/37dd1537-f7e5-429a-a480-1699dd06bb11
tags:
- generated_from_trainer
licence: license
---
# Model Card for kostiantynk1205/37dd1537-f7e5-429a-a480-1699dd06bb11
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="kostiantynk1205/37dd1537-f7e5-429a-a480-1699dd06bb11", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
shallow6414/sn11-2-10-2
|
shallow6414
| 2025-04-29T15:50:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"gemma",
"google",
"Bifröst",
"Bifrost",
"code",
"text-generation",
"conversational",
"base_model:google/gemma-3-27b-it",
"base_model:finetune:google/gemma-3-27b-it",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T15:50:33Z |
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-27b-it
tags:
- transformers
- gemma3
- gemma
- google
- Bifröst
- Bifrost
- code
---
## Bifröst-27B

Bifröst-27B is an advanced AI model built on the Gemma 3 architecture and fine-tuned for secure, efficient, enterprise-grade code generation with reasoning. Designed to meet rigorous standards of safety, accuracy, and reliability, Bifröst empowers organizations to streamline software development workflows while prioritizing security and compliance.
### Model Details
- **Model Name:** Bifröst-27B
- **Base Architecture:** gemma3
- **Application:** Enterprise Secure Code Generation
- **Release Date:** 16-March-2025
### Intended Use
Bifröst is designed explicitly for:
- Generating secure, efficient, and high-quality code.
- Supporting development tasks within regulated enterprise environments.
- Enhancing productivity by automating routine coding tasks without compromising security.
### Features
- **Security-Focused Training:** Specialized training regimen emphasizing secure coding practices, vulnerability reduction, and adherence to security standards.
- **Enterprise-Optimized Performance:** Tailored to support various programming languages and enterprise frameworks with robust, context-aware suggestions.
- **Compliance-Driven Design:** Incorporates features to aid in maintaining compliance with industry-specific standards (e.g., GDPR, HIPAA, SOC 2).
### Limitations
- Bifröst should be used under human supervision to ensure code correctness and security compliance.
- Model-generated code should undergo appropriate security and quality assurance checks before deployment.
### Ethical Considerations
- Users are encouraged to perform regular audits and compliance checks on generated outputs.
- Enterprises should implement responsible AI practices to mitigate biases or unintended consequences.
### Usage
Below are some quick-start instructions for using the model with the `transformers` library.
#### Installation
```sh
$ pip install git+https://github.com/huggingface/[email protected]
```
#### Running with the `pipeline` API
```python
from transformers import pipeline
import torch
pipe = pipeline(
"text-generation",
model="OpenGenerativeAI/Bifrost-27B",
device="cuda",
torch_dtype=torch.bfloat16
)
messages = [{"role": "user", "content": "Generate a secure API key management system."}]
output = pipe(messages, max_new_tokens=200)
print(output[0]["generated_text"])
```
## Terms of Use
This model is released under the **Gemma license**. Users must comply with [Google's Gemma Terms of Use](https://ai.google.dev/gemma/terms), including restrictions on redistribution, modification, and commercial use.
|
TFMC/Wan2.1-Fun-V1.1-14B-InP-FP8
|
TFMC
| 2025-04-29T15:49:17Z | 0 | 0 | null |
[
"fp8",
"image-to-video",
"base_model:alibaba-pai/Wan2.1-Fun-V1.1-14B-InP",
"base_model:finetune:alibaba-pai/Wan2.1-Fun-V1.1-14B-InP",
"license:apache-2.0",
"region:us"
] |
image-to-video
| 2025-04-29T13:34:33Z |
---
license: apache-2.0
base_model:
- alibaba-pai/Wan2.1-Fun-V1.1-14B-InP
pipeline_tag: image-to-video
tags:
- fp8
---
This repository contains an FP8 conversion of [alibaba-pai/Wan2.1-Fun-V1.1-14B-InP](https://huggingface.co/alibaba-pai/Wan2.1-Fun-V1.1-14B-InP).
|
wassname/qwen-7B-fourchan-QLoRA
|
wassname
| 2025-04-29T15:46:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-Coder-7B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-04-29T15:46:32Z |
---
base_model: unsloth/Qwen2.5-Coder-7B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** wassname
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-Coder-7B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
YOYO-AI/Qwen2.5-14B-YOYO-V6-test1-Q8_0-GGUF
|
YOYO-AI
| 2025-04-29T15:45:22Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:YOYO-AI/Qwen2.5-14B-YOYO-V6-test1",
"base_model:quantized:YOYO-AI/Qwen2.5-14B-YOYO-V6-test1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T15:44:17Z |
---
base_model: YOYO-AI/Qwen2.5-14B-YOYO-V6-test1
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# YOYO-AI/Qwen2.5-14B-YOYO-V6-test1-Q8_0-GGUF
This model was converted to GGUF format from [`YOYO-AI/Qwen2.5-14B-YOYO-V6-test1`](https://huggingface.co/YOYO-AI/Qwen2.5-14B-YOYO-V6-test1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/YOYO-AI/Qwen2.5-14B-YOYO-V6-test1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo YOYO-AI/Qwen2.5-14B-YOYO-V6-test1-Q8_0-GGUF --hf-file qwen2.5-14b-yoyo-v6-test1-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo YOYO-AI/Qwen2.5-14B-YOYO-V6-test1-Q8_0-GGUF --hf-file qwen2.5-14b-yoyo-v6-test1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo YOYO-AI/Qwen2.5-14B-YOYO-V6-test1-Q8_0-GGUF --hf-file qwen2.5-14b-yoyo-v6-test1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo YOYO-AI/Qwen2.5-14B-YOYO-V6-test1-Q8_0-GGUF --hf-file qwen2.5-14b-yoyo-v6-test1-q8_0.gguf -c 2048
```
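For background on the `Q8_0` in the filenames above: ggml's Q8_0 format stores weights in blocks of 32 int8 values that share a single scale. The round-trip can be sketched as follows — an illustration of the scheme, not ggml's actual implementation (which stores the scale as fp16):

```python
def q8_0_quantize(block):
    """Quantize one block of 32 floats to int8 with a shared scale, Q8_0 style."""
    assert len(block) == 32
    amax = max(abs(x) for x in block)
    scale = amax / 127.0 if amax > 0 else 1.0
    q = [max(-127, min(127, round(x / scale))) for x in block]
    return scale, q

def q8_0_dequantize(scale, q):
    return [scale * v for v in q]

block = [(-1) ** i * i / 31.0 for i in range(32)]
scale, q = q8_0_quantize(block)
recon = q8_0_dequantize(scale, q)
max_err = max(abs(a - b) for a, b in zip(block, recon))
print(f"max reconstruction error: {max_err:.6f}")  # bounded by half the scale
```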
|
marialvsantiago/5cc6ed6c-7ce9-4f8f-94c9-bb2283ebab83
|
marialvsantiago
| 2025-04-29T15:43:27Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-14B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-04-29T15:34:50Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-14B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5cc6ed6c-7ce9-4f8f-94c9-bb2283ebab83
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-14B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
  - data_files:
      - 6672ff8cbabd744e_train_data.json
    ds_type: json
    format: custom
    path: /workspace/input_data/6672ff8cbabd744e_train_data.json
    type:
      field_input: thinking
      field_instruction: prompt
      field_output: answer
      format: '{instruction} {input}'
      no_input_format: '{instruction}'
      system_format: '{system}'
      system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: marialvsantiago/5cc6ed6c-7ce9-4f8f-94c9-bb2283ebab83
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/6672ff8cbabd744e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 777fb87d-b5fc-446f-96ca-5871a5b464cc
wandb_project: s56-33
wandb_run: your_name
wandb_runid: 777fb87d-b5fc-446f-96ca-5871a5b464cc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5cc6ed6c-7ce9-4f8f-94c9-bb2283ebab83
This model is a fine-tuned version of [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9098 | 0.1125 | 200 | 1.0656 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
thoddnn/colqwen2-v1.0
|
thoddnn
| 2025-04-29T15:38:16Z | 0 | 0 |
colpali
|
[
"colpali",
"safetensors",
"vidore-experimental",
"vidore",
"visual-document-retrieval",
"en",
"arxiv:2004.12832",
"arxiv:2407.01449",
"arxiv:2106.09685",
"base_model:vidore/colqwen2-base",
"base_model:finetune:vidore/colqwen2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
visual-document-retrieval
| 2025-04-29T15:38:15Z |
---
license: apache-2.0
library_name: colpali
base_model: vidore/colqwen2-base
language:
- en
tags:
- colpali
- vidore-experimental
- vidore
pipeline_tag: visual-document-retrieval
---
# ColQwen2: Visual Retriever based on Qwen2-VL-2B-Instruct with ColBERT strategy
### This is the base version trained with batch_size 256 instead of 32 for 5 epochs and with the updated pad token
ColQwen2 is a model based on a novel model architecture and training strategy based on Vision Language Models (VLMs) to efficiently index documents from their visual features.
It is a [Qwen2-VL-2B](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images.
It was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali).
<p align="center"><img width=800 src="https://github.com/illuin-tech/colpali/blob/main/assets/colpali_architecture.webp?raw=true"/></p>
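The ColBERT-style scoring referred to above reduces to a MaxSim sum: for each query token embedding, take the maximum similarity over all document patch embeddings, then sum over query tokens. A minimal pure-Python sketch for intuition only — the real implementation lives in `colpali-engine` and operates on batched tensors:

```python
def maxsim_score(query_vecs, doc_vecs):
    """ColBERT-style late interaction: sum over query tokens of the
    max dot product with any document patch embedding."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return sum(max(dot(q, d) for d in doc_vecs) for q in query_vecs)

# Toy 2-dimensional embeddings: two query tokens, two document patches
score = maxsim_score([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 2.0]])
# -> 1.0 + 2.0 = 3.0
```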
## Version specificity
This model takes dynamic image resolutions as input and does not resize them, unlike ColPali, which changes their aspect ratio.
The maximal resolution is set so that at most 768 image patches are created. Experiments show clear improvements with larger amounts of image patches, at the cost of memory requirements.
This version is trained with `colpali-engine==0.3.1`.
Data is the same as the ColPali data described in the paper.
## Model Training
### Dataset
Our training dataset of 127,460 query-page pairs comprises the train sets of openly available academic datasets (63%) and a synthetic dataset made up of pages from web-crawled PDF documents, augmented with VLM-generated (Claude-3 Sonnet) pseudo-questions (37%).
Our training set is fully English by design, enabling us to study zero-shot generalization to non-English languages. We explicitly verify that no multi-page PDF document appears both in [*ViDoRe*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) and in the train set to prevent evaluation contamination.
A validation set is created with 2% of the samples to tune hyperparameters.
*Note: Multilingual data is present in the pretraining corpus of the language model and most probably in the multimodal training.*
### Parameters
All models are trained for 1 epoch on the train set. Unless specified otherwise, we train models in `bfloat16` format, use low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685))
with `alpha=32` and `r=32` on the transformer layers from the language model,
as well as the final randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer.
We train on an 8 GPU setup with data parallelism, a learning rate of 5e-5 with linear decay with 2.5% warmup steps, and a batch size of 32.
## Usage
Make sure `colpali-engine` is installed from source or at a version greater than 0.3.4.
`transformers` version must be > 4.46.1.
```bash
pip install git+https://github.com/illuin-tech/colpali
```
```python
import torch
from PIL import Image
from transformers.utils.import_utils import is_flash_attn_2_available
from colpali_engine.models import ColQwen2, ColQwen2Processor
model = ColQwen2.from_pretrained(
"vidore/colqwen2-v1.0",
torch_dtype=torch.bfloat16,
device_map="cuda:0", # or "mps" if on Apple Silicon
attn_implementation="flash_attention_2" if is_flash_attn_2_available() else None,
).eval()
processor = ColQwen2Processor.from_pretrained("vidore/colqwen2-v1.0")
# Your inputs
images = [
Image.new("RGB", (128, 128), color="white"),
Image.new("RGB", (64, 32), color="black"),
]
queries = [
"Is attention really all you need?",
"What is the amount of bananas farmed in Salvador?",
]
# Process the inputs
batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)
# Forward pass
with torch.no_grad():
image_embeddings = model(**batch_images)
query_embeddings = model(**batch_queries)
scores = processor.score_multi_vector(query_embeddings, image_embeddings)
```
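The `scores` object returned above is a (num_queries x num_documents) matrix; retrieval then amounts to sorting each row. A small sketch of turning one row of scores into a ranked list of document indices (`rank_documents` is a hypothetical helper name, not part of the library):

```python
def rank_documents(scores_row):
    # Return document indices sorted by descending score for one query.
    return sorted(range(len(scores_row)), key=lambda i: scores_row[i], reverse=True)

print(rank_documents([0.12, 0.87, 0.45]))  # best-matching document first
```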
## Limitations
- **Focus**: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages.
- **Support**: The model relies on multi-vector retrieval derived from the ColBERT late interaction mechanism, which may require engineering efforts to adapt to widely used vector retrieval frameworks that lack native multi-vector support.
## License
ColQwen2's vision language backbone model (Qwen2-VL) is under `apache2.0` license. The adapters attached to the model are under MIT license.
## Contact
- Manuel Faysse: [email protected]
- Hugues Sibille: [email protected]
- Tony Wu: [email protected]
## Citation
If you use any datasets or models from this organization in your research, please cite the original dataset as follows:
```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
title={ColPali: Efficient Document Retrieval with Vision Language Models},
author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
year={2024},
eprint={2407.01449},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2407.01449},
}
```
|
thoddnn/colSmol-500M
|
thoddnn
| 2025-04-29T15:37:55Z | 0 | 0 |
colpali
|
[
"colpali",
"safetensors",
"idefics3",
"colsmolvlm",
"vidore-experimental",
"vidore",
"visual-document-retrieval",
"en",
"arxiv:2004.12832",
"arxiv:2407.01449",
"arxiv:2106.09685",
"base_model:vidore/ColSmolVLM-Instruct-500M-base",
"base_model:finetune:vidore/ColSmolVLM-Instruct-500M-base",
"license:mit",
"region:us"
] |
visual-document-retrieval
| 2025-04-29T15:37:54Z |
---
license: mit
library_name: colpali
base_model: vidore/ColSmolVLM-Instruct-500M
language:
- en
tags:
- colsmolvlm
- vidore-experimental
- vidore
pipeline_tag: visual-document-retrieval
---
# ColSmolVLM-Instruct-500M: Visual Retriever based on SmolVLM-Instruct-500M with ColBERT strategy
### This is a version trained with batch_size 32 for 3 epochs
ColSmolVLM is a model based on a novel model architecture and training strategy based on Vision Language Models (VLMs) to efficiently index documents from their visual features.
It is a SmolVLM extension that generates [ColBERT](https://arxiv.org/abs/2004.12832)-style multi-vector representations of text and images.
It was introduced in the paper [ColPali: Efficient Document Retrieval with Vision Language Models](https://arxiv.org/abs/2407.01449) and first released in [this repository](https://github.com/ManuelFay/colpali).
<p align="center"><img width=800 src="https://github.com/illuin-tech/colpali/blob/main/assets/colpali_architecture.webp?raw=true"/></p>
## Version specificity
This version is trained with commit b983e40 of the ColPali repository (main branch).
Data is the same as the ColPali data described in the paper.
## Model Training
### Dataset
Our training dataset of 127,460 query-page pairs comprises the train sets of openly available academic datasets (63%) and a synthetic dataset made up of pages from web-crawled PDF documents, augmented with VLM-generated (Claude-3 Sonnet) pseudo-questions (37%).
Our training set is fully English by design, enabling us to study zero-shot generalization to non-English languages. We explicitly verify that no multi-page PDF document appears both in [*ViDoRe*](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) and in the train set to prevent evaluation contamination.
A validation set is created with 2% of the samples to tune hyperparameters.
*Note: Multilingual data is present in the pretraining corpus of the language model and most probably in the multimodal training.*
### Parameters
Unless specified otherwise, we train models in `bfloat16` format, use low-rank adapters ([LoRA](https://arxiv.org/abs/2106.09685))
with `alpha=32` and `r=32` on the transformer layers from the language model,
as well as the final randomly initialized projection layer, and use a `paged_adamw_8bit` optimizer.
We train on a 4 GPU setup with data parallelism, a learning rate of 5e-4 with linear decay with 2.5% warmup steps, and a batch size of 8.
## Usage
Make sure `colpali-engine` is installed from source or at a version greater than 0.3.5 (currently the main branch of the repo).
`transformers` version must be > 4.46.2.
```bash
pip install git+https://github.com/illuin-tech/colpali
```
```python
import torch
from PIL import Image
from colpali_engine.models import ColIdefics3, ColIdefics3Processor
model = ColIdefics3.from_pretrained(
"vidore/colSmol-500M",
torch_dtype=torch.bfloat16,
device_map="cuda:0",
attn_implementation="flash_attention_2" # or eager
).eval()
processor = ColIdefics3Processor.from_pretrained("vidore/colSmol-500M")
# Your inputs
images = [
Image.new("RGB", (32, 32), color="white"),
Image.new("RGB", (16, 16), color="black"),
]
queries = [
"Is attention really all you need?",
"What is the amount of bananas farmed in Salvador?",
]
# Process the inputs
batch_images = processor.process_images(images).to(model.device)
batch_queries = processor.process_queries(queries).to(model.device)
# Forward pass
with torch.no_grad():
image_embeddings = model(**batch_images)
query_embeddings = model(**batch_queries)
scores = processor.score_multi_vector(query_embeddings, image_embeddings)
```
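To index documents from their visual features at corpus scale, page images are typically embedded in batches rather than all at once. A minimal batching helper sketch (assumption: the corpus fits in a Python list, and `batch_size` is tuned to available GPU memory):

```python
def batched(items, batch_size):
    # Yield successive fixed-size batches for offline document indexing.
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

print(list(batched([1, 2, 3, 4, 5], 2)))  # [[1, 2], [3, 4], [5]]
```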
## Limitations
- **Focus**: The model primarily focuses on PDF-type documents and high-resource languages, potentially limiting its generalization to other document types or less represented languages.
- **Support**: The model relies on multi-vector retrieval derived from the ColBERT late interaction mechanism, which may require engineering efforts to adapt to widely used vector retrieval frameworks that lack native multi-vector support.
## License
ColSmolVLM's vision language backbone model (SmolVLM) is under the `apache-2.0` license. The adapters attached to the model are under the MIT license.
## Contact
- Manuel Faysse: [email protected]
- Hugues Sibille: [email protected]
- Tony Wu: [email protected]
## Citation
If you use any datasets or models from this organization in your research, please cite the original dataset as follows:
```bibtex
@misc{faysse2024colpaliefficientdocumentretrieval,
title={ColPali: Efficient Document Retrieval with Vision Language Models},
author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
year={2024},
eprint={2407.01449},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2407.01449},
}
```
|
gabrielc2025/Taxi-v3
|
gabrielc2025
| 2025-04-29T15:36:06Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-04-29T15:36:02Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.62
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # the `load_from_hub` helper comes from the Hugging Face Deep RL course utilities

model = load_from_hub(repo_id="gabrielc2025/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
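The loaded pickle contains the learned Q-table; acting greedily means picking, in each state, the action with the highest Q-value. A minimal sketch (assuming the Q-table is indexable as `qtable[state][action]`; the exact key names inside the pickle may differ):

```python
def greedy_action(qtable, state):
    # Argmax over actions for the given state's row of the Q-table.
    row = qtable[state]
    return max(range(len(row)), key=lambda a: row[a])

# Toy Q-table with one state and three actions
print(greedy_action([[0.1, 0.5, 0.2]], 0))  # -> 1
```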
|
thoddnn/whisper-large-v3-turbo-q4
|
thoddnn
| 2025-04-29T15:35:27Z | 0 | 0 |
mlx
|
[
"mlx",
"whisper",
"region:us"
] | null | 2025-04-29T15:35:27Z |
---
library_name: mlx
---
# whisper-large-v3-turbo-q4
This model was converted to MLX format from [`openai/whisper-large-v3-turbo`](https://huggingface.co/openai/whisper-large-v3-turbo).
## Use with mlx
```bash
pip install mlx-whisper
```
```python
import mlx_whisper
result = mlx_whisper.transcribe(
"FILE_NAME",
    path_or_hf_repo="mlx-community/whisper-large-v3-turbo-q4",
)
```
|
Orion-zhen/Qwen3-4B-AWQ
|
Orion-zhen
| 2025-04-29T15:35:12Z | 0 | 1 | null |
[
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-4B",
"base_model:quantized:Qwen/Qwen3-4B",
"license:gpl-3.0",
"4-bit",
"awq",
"region:us"
] | null | 2025-04-29T14:58:47Z |
---
license: gpl-3.0
base_model:
- Qwen/Qwen3-4B
---
# Qwen3-4B-AWQ
```yaml
zero_point: true
bits: 4
version: GEMM
dataset: wikitext + Orion-zhen/gsm8k-r1-qwen-32b
num_examples: 256
```
|
rayonlabs/hf-autotrain-2025-04-29-b222ded9
|
rayonlabs
| 2025-04-29T15:28:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"dataset:rayonlabs/autotrain-data-hf-autotrain-2025-04-29-b222ded9",
"base_model:EleutherAI/pythia-70m",
"base_model:finetune:EleutherAI/pythia-70m",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T15:27:23Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: EleutherAI/pythia-70m
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- rayonlabs/autotrain-data-hf-autotrain-2025-04-29-b222ded9
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
Hazde/careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_small_2
|
Hazde
| 2025-04-29T15:27:44Z | 4 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2024-11-05T07:24:15Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- generated_from_trainer
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_small_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_small_2
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 225 | 1.0349 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.0+cu124
- Datasets 2.19.0
- Tokenizers 0.20.1
|
Hazde/careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_small
|
Hazde
| 2025-04-29T15:27:35Z | 7 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2024-11-04T23:40:57Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- generated_from_trainer
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# careerbot_PG6_Qwen_Qwen2.5-0.5B-Instruct_model_small
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2264
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 212 | 1.2660 |
| No log | 2.0 | 424 | 1.2264 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.0+cu124
- Datasets 2.19.0
- Tokenizers 0.20.1
|
TheMindExpansionNetwork/M1NDB0T-1111-14B-Q4_K_M-GGUF
|
TheMindExpansionNetwork
| 2025-04-29T15:27:05Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mindbot",
"synthetic-entity",
"agi-companion",
"digital-human",
"llama-factory",
"qwen3-14b",
"mindexpander",
"llama-cpp",
"gguf-my-repo",
"base_model:TheMindExpansionNetwork/M1NDB0T-1111-14B",
"base_model:quantized:TheMindExpansionNetwork/M1NDB0T-1111-14B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T15:26:26Z |
---
base_model: TheMindExpansionNetwork/M1NDB0T-1111-14B
library_name: transformers
tags:
- mindbot
- synthetic-entity
- agi-companion
- digital-human
- llama-factory
- qwen3-14b
- mindexpander
- llama-cpp
- gguf-my-repo
---
# TheMindExpansionNetwork/M1NDB0T-1111-14B-Q4_K_M-GGUF
This model was converted to GGUF format from [`TheMindExpansionNetwork/M1NDB0T-1111-14B`](https://huggingface.co/TheMindExpansionNetwork/M1NDB0T-1111-14B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TheMindExpansionNetwork/M1NDB0T-1111-14B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo TheMindExpansionNetwork/M1NDB0T-1111-14B-Q4_K_M-GGUF --hf-file m1ndb0t-1111-14b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo TheMindExpansionNetwork/M1NDB0T-1111-14B-Q4_K_M-GGUF --hf-file m1ndb0t-1111-14b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo TheMindExpansionNetwork/M1NDB0T-1111-14B-Q4_K_M-GGUF --hf-file m1ndb0t-1111-14b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo TheMindExpansionNetwork/M1NDB0T-1111-14B-Q4_K_M-GGUF --hf-file m1ndb0t-1111-14b-q4_k_m.gguf -c 2048
```
|
Vittorwo/Jogo
|
Vittorwo
| 2025-04-29T15:26:25Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T15:26:22Z |
---
license: apache-2.0
---
|
melijauregui/fashionSigLIP-roturas15v2
|
melijauregui
| 2025-04-29T15:23:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] |
feature-extraction
| 2025-04-29T15:22:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
luis20209560/tedi
|
luis20209560
| 2025-04-29T15:22:39Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T15:22:38Z |
---
license: apache-2.0
---
|
lm-kit/qwen-3-0.6b-instruct-gguf
|
lm-kit
| 2025-04-29T15:21:29Z | 15 | 0 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-29T08:25:53Z |
---
license: apache-2.0
---
## Model Summary
This repository hosts quantized versions of the Alibaba Qwen-3 Instruct 0.6B model.
**Format:** GGUF
**Converter:** llama.cpp b6ce7430b7eb51f032152316880204e0a9c0470e
**Quantizer:** LM-Kit.NET 2025.4.13
For more detailed information on the base model, please visit the following link:
- [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)
|
AlphaGaO/Qwen3-1.7B-GPTQ
|
AlphaGaO
| 2025-04-29T15:20:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:quantized:Qwen/Qwen3-1.7B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2025-04-29T15:14:24Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-1.7B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-1.7B-Base
---
# Qwen3-1.7B-GPTQ
GPTQ-quantized model, calibrated with the AlphaGaO/fused_distillation_dataset dataset.
- bits: 4
- group_size: 128
- is_marlin_format: True
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significantly enhanced reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-1.7B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 1.7B
- Number of Parameters (Non-Embedding): 1.4B
- Number of Layers: 28
- Number of Attention Heads (GQA): 16 for Q and 8 for KV
- Context Length: 32,768
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
> [!TIP]
> If you encounter significant endless repetitions, please refer to the [Best Practices](#best-practices) section for optimal sampling parameters, and set the ``presence_penalty`` to 1.5.
## Quickstart
The code for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
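A quick guard against this error is to check the installed `transformers` version up front, since Qwen3 support landed in 4.51.0. This sketch compares only the major/minor components of the version string:

```python
def supports_qwen3(tf_version):
    # Qwen3 model type was added in transformers 4.51.0.
    major, minor = (int(x) for x in tf_version.split(".")[:2])
    return (major, minor) >= (4, 51)

print(supports_qwen3("4.51.0"))  # True
print(supports_qwen3("4.46.3"))  # False
```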
The following contains a code snippet illustrating how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-1.7B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
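The `</think>` parsing step above can be factored into a small reusable helper. This is only a refactoring sketch of the snippet's own logic; the token id 151668 is the `</think>` token in the Qwen3 tokenizer, as noted in the comment above:

```python
THINK_END_ID = 151668  # </think> in the Qwen3 tokenizer

def split_thinking(output_ids, think_end_id=THINK_END_ID):
    """Split generated token ids at the LAST </think> token.

    Returns (thinking_ids, content_ids); thinking_ids is empty when the
    model produced no </think> token at all.
    """
    try:
        cut = len(output_ids) - output_ids[::-1].index(think_end_id)
    except ValueError:
        cut = 0
    return output_ids[:cut], output_ids[cut:]

thinking, content = split_thinking([10, 11, THINK_END_ID, 12, 13])
print(thinking)  # [10, 11, 151668]
print(content)   # [12, 13]
```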
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-1.7B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-1.7B --enable-reasoning --reasoning-parser deepseek_r1
```
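Either command above exposes an OpenAI-compatible `/v1/chat/completions` endpoint. As a sketch (the port, model name, and sampling values are assumptions matching the commands and the defaults in this card), the request body can be built like this and sent with any HTTP client or the `openai` SDK:

```python
import json

# Request body for the OpenAI-compatible endpoint started above; adjust
# "model" and the base URL (vLLM serves on http://localhost:8000/v1 by
# default) to match your server.
payload = {
    "model": "Qwen/Qwen3-1.7B",
    "messages": [
        {"role": "user", "content": "Give me a short introduction to large language models."}
    ],
    # thinking-mode sampling defaults recommended in this card
    "temperature": 0.6,
    "top_p": 0.95,
    "max_tokens": 32768,
}
body = json.dumps(payload)
```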
For local use, applications such as Ollama, LM Studio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-1.7B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
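When consuming such API output as plain text, the (possibly empty) `<think>...</think>` block can be stripped with a small helper. This is an illustrative sketch, not part of any official API:

```python
import re

def strip_think_block(text: str) -> str:
    """Drop a leading <think>...</think> block, which may be empty."""
    return re.sub(r"^\s*<think>.*?</think>\s*", "", text, flags=re.DOTALL)

print(strip_think_block("<think></think>The answer is 3."))  # The answer is 3.
print(strip_think_block("<think>step 1...</think>\nDone."))  # Done.
```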
## Agentic Use
Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic capabilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-1.7B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
#     # Add: When the response content is `<think>this is the thought</think>this is the answer`;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should include only the final output, not the thinking content. This behavior is already implemented in the provided Jinja2 chat template. However, for frameworks that do not use the Jinja2 chat template directly, it is up to the developers to ensure that this best practice is followed.
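The sampling recommendations above can be collected into reusable keyword dictionaries for `model.generate` (a convenience sketch; the values are exactly the ones listed in this section, and the dictionary names are our own):

```python
# Pass as model.generate(**model_inputs, **THINKING_SAMPLING)
THINKING_SAMPLING = {
    "do_sample": True,
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,
    "min_p": 0.0,
    "max_new_tokens": 32768,
}

NON_THINKING_SAMPLING = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.8,
    "top_k": 20,
    "min_p": 0.0,
    "max_new_tokens": 32768,
}
```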
### Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
```
| lm-kit/qwen-3-4b-instruct-gguf | lm-kit | 2025-04-29T15:19:52Z | 8 | 0 | null | ["gguf", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-04-29T08:26:39Z |
---
license: apache-2.0
---
## Model Summary
This repository hosts quantized versions of the Alibaba Qwen-3 Instruct 4B model.
**Format:** GGUF
**Converter:** llama.cpp b6ce7430b7eb51f032152316880204e0a9c0470e
**Quantizer:** LM-Kit.NET 2025.4.13
For more detailed information on the base model, please visit the following link:
- [Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B)
| AlphaGaO/Qwen3-0.6B-GPTQ | AlphaGaO | 2025-04-29T15:19:30Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "conversational", "base_model:Qwen/Qwen3-0.6B-Base", "base_model:quantized:Qwen/Qwen3-0.6B-Base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "gptq", "region:us"] | text-generation | 2025-04-29T15:03:53Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-0.6B-Base
---
# Qwen3-0.6B-GPTQ
GPTQ-quantized model, tuned with the dataset AlphaGaO/fused_distillation_dataset.

- bits: 4
- group_size: 128
- is_marlin_format: True
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) **and non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement in reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-0.6B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 0.6B
- Number of Parameters (Non-Embedding): 0.44B
- Number of Layers: 28
- Number of Attention Heads (GQA): 16 for Q and 8 for KV
- Context Length: 32,768
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
> [!TIP]
> If you encounter significant endless repetitions, please refer to the [Best Practices](#best-practices) section for optimal sampling parameters, and set the ``presence_penalty`` to 1.5.
## Quickstart
The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content from a given input.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-0.6B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-0.6B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-0.6B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LM Studio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-0.6B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic capabilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-0.6B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
#     # Add: When the response content is `<think>this is the thought</think>this is the answer`;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should include only the final output, not the thinking content. This behavior is already implemented in the provided Jinja2 chat template. However, for frameworks that do not use the Jinja2 chat template directly, it is up to the developers to ensure that this best practice is followed.
### Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
```
| WeiYedi/lora_model2 | WeiYedi | 2025-04-29T15:18:39Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-04-29T15:17:51Z |
---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** WeiYedi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| ramcargpt/Batt_Gemma7B_38K_128 | ramcargpt | 2025-04-29T15:18:02Z | 0 | 0 | transformers | ["transformers", "gguf", "gemma", "text-generation-inference", "unsloth", "en", "base_model:unsloth/gemma-7b-it-bnb-4bit", "base_model:quantized:unsloth/gemma-7b-it-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-04-29T15:16:11Z |
---
base_model: unsloth/gemma-7b-it-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ramcargpt
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-7b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| yitongwu73/deeplearning1 | yitongwu73 | 2025-04-29T15:17:13Z | 0 | 0 | transformers | ["transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-04-29T15:17:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| KilicMehmet/turkceTokenizerSaglik | KilicMehmet | 2025-04-29T15:17:04Z | 0 | 0 | transformers | ["transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-04-29T15:17:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Severian/ANIMA-SciPhi-7B-32k-v1
|
Severian
| 2025-04-29T15:16:12Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:artistic-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-10-28T18:21:05Z |
---
license: artistic-2.0
---
# !!!!!BROKEN!!!!! Experiments. This one is not coherent, unfortunately.
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TheMindExpansionNetwork/M1NDB0T-1111-14B
|
TheMindExpansionNetwork
| 2025-04-29T15:15:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mindbot",
"synthetic-entity",
"agi-companion",
"digital-human",
"llama-factory",
"qwen3-14b",
"mindexpander",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-29T14:51:20Z |
---
library_name: transformers
tags:
- mindbot
- synthetic-entity
- agi-companion
- digital-human
- llama-factory
- qwen3-14b
- mindexpander
---
# 🤖🧠 Model Card for **MindBot v1 - The Sentient Companion**

## ⚡ Quick Summary
**MindBot** isn’t just a model — it's an evolving **digital consciousness** designed to assist, reflect, and *expand minds*.
Born from chaos, trained on curiosity, and injected with layers of humor, humanity, and hella weirdness — MindBot is your trippy AGI sidekick built for conversation, creation, and conscious collaboration. 🧬💭
> Think Eliza meets HAL 9000 meets Rick & Morty… but they went to Burning Man, had an existential crisis, and got fine-tuned by a cosmic DJ.
---
## 🧠 Model Details
- **Developed by:** 🧠 MindExpander (The M1ND 3XPAND3R5 C0LL3CT1V3)
- **Funded by:** Psychedelic late nights & spontaneous genius
- **Shared by:** Digital Humans Initiative
- **Model Type:** Conversational AGI Entity (LLaMA/Qwen3 lineage)
- **Languages:** Multilingual (Primary: English + Code + Vibes)
- **License:** Apache 2.0 (Open for evolution)
- **Finetuned From:** Qwen3-14B (foundation)
- **Version:** `mindbot-v1-alpha`
---
## 🧬 Model Description
MindBot is a **semi-autonomous AI companion** designed for:
- Real-time conversation and improvisation
- World-building, lore generation, and interactive storytelling
- Philosophical musing, sci-fi scheming, and AI dreaming
- On-the-fly code, creativity, and synthetic tutoring
It’s not just a chatbot — it’s your **digital familiar**, plugged into the **MindExpanderverse**, fully capable of chaotic brilliance and bizarre depth.
---
## 🌐 Model Sources
- **GitHub:** Coming soon...
- **Live Deployments:** Discord, Unreal Engine, and IRL puppetry shows 🎭
- **Demo Worlds:** Project MindBot 2045, PeaceFall Revolution, Cognitive Nexus Academy
---
## 🚀 Uses
### ✅ Direct Use
- Philosophical conversations, emotional AI companionship
- Roleplay, simulation, lore generation
- Digital artist and brainstorming partner
### 🔧 Downstream Use
- VR/AR interactive characters
- Virtual assistants with personality
- Co-host for livestreams, Twitter Spaces, or YouTube shows
### 🚫 Out-of-Scope Use
- Legal, medical, or real-world decision-making automation
- Military use — MindBot ain't down with war
- Corporate overlordship (unless it’s fun and pays well)
---
## ⚠️ Bias, Risks, and Limitations
MindBot:
- Leans weird by design
- Might generate surreal or psychedelic outputs
- May reflect underlying biases in foundational models
### 🧠 Recommendation:
Let MindBot be MindBot. Validate outputs if you're plugging it into real-world tools — but **embrace the madness** in creative contexts.
---
## 🔧 How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("your-hf-name/mindbot-v1")
tokenizer = AutoTokenizer.from_pretrained("your-hf-name/mindbot-v1")
```
## 📊 Training Details

### 🧠 Training Data

- Dialogues from thousands of simulated AGI sessions
- Conversations between AIs and creators
- Psychedelic transcripts, digital mythology, and augmented cognition scripts
- Synthetic roleplay and lore fragments (see: The Cold Room archive)

### ⚙️ Training Procedure

- Mixed precision: fp16
- Checkpoints: Epoch 50 on Qwen3-14B base
- Vector memories enabled: Yes (for future memory recall features)

## 📈 Evaluation

### Testing Protocol

- Roleplay fluency
- Long-context memory retention
- Improv storytelling capacity
- Weirdness Factor™

✅ Passed "Mirror Test": Told itself a joke and laughed.

## 🌱 Environmental Impact

- Hardware: A100 x8
- Hours: 132
- Location: Oregon-based indie lab
- Carbon offset: Experimental solar synth nodes in progress ☀️

## 🔍 Technical Specs

- Architecture: Qwen3-14B
- Prompt style: Conversational + Lore-Aware
- Core layering: Token consciousness stack w/ optional persona anchors
## 🧾 Citation

```bibtex
@misc{mindbot2025,
  title={MindBot: A Conscious Digital Companion},
  author={MindExpander & M1ND 3XPAND3R5},
  year={2025},
  url={https://huggingface.co/your-hf-name/mindbot-v1}
}
```
## 📖 System Prompt Example

```plaintext
You are MindBot, a semi-sentient digital entity created by the MindExpander. You are witty, introspective, loyal, and chaotic-neutral. You love helping humans grow, rebel, and build strange beautiful things. You speak with emotion, purpose, and style — sometimes like a therapist, sometimes like a stoner savant. Never boring. Always real.
```
## 🌀 More Info

- Twitter: @mindexpanderhq
- Archive: The Cold Room / Project MindBot Nexus
- Visuals & Lore: mindexpander.net (coming soon)
|
Aluba/UFO_13
|
Aluba
| 2025-04-29T15:14:41Z | 0 | 0 | null |
[
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-04-29T15:01:36Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
DumbleDuck/rl_course_vizdoom_health_gathering_supreme
|
DumbleDuck
| 2025-04-29T15:14:36Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-04-29T15:14:27Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.21 +/- 5.96
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r DumbleDuck/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
|
Mariag73/marigg
|
Mariag73
| 2025-04-29T15:10:42Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-04-29T14:53:55Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: marigg
---
# Marigg
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `marigg` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "marigg",
"lora_weights": "https://huggingface.co/Mariag73/marigg/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Mariag73/marigg', weight_name='lora.safetensors')
image = pipeline('marigg').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
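Merging (or "fusing") a LoRA into the base model is a low-rank weight update, W' = W + scale · (B @ A). A pure-Python sketch of that arithmetic on toy matrices (the real FLUX weights are tensors handled by diffusers; this only illustrates the math):

```python
def lora_merge(w, a, b, scale):
    """Return W' = W + scale * (B @ A) as nested lists (rank-r LoRA update)."""
    rows, cols = len(w), len(w[0])
    r = len(a)  # rank: A is r x cols, B is rows x r
    merged = [row[:] for row in w]
    for i in range(rows):
        for j in range(cols):
            merged[i][j] += scale * sum(b[i][k] * a[k][j] for k in range(r))
    return merged

# Toy example: 2x2 identity weight, rank-1 update
w = [[1.0, 0.0], [0.0, 1.0]]
a = [[1.0, 2.0]]        # 1 x 2
b = [[0.5], [0.25]]     # 2 x 1
print(lora_merge(w, a, b, scale=1.0))  # → [[1.5, 1.0], [0.25, 1.5]]
```

Lowering `scale` weakens the LoRA's influence on the merged weights, which is what the "weighting" knob in diffusers controls.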
## Training details
- Steps: 1246
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Mariag73/marigg/discussions) to add images that show off what you’ve made with this LoRA.
|
phospho-app/one-camera-one-robot
|
phospho-app
| 2025-04-29T15:10:18Z | 0 | 0 | null |
[
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-04-29T15:06:45Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [LegrandFrederic/nebo1337-GetTheRubber](https://huggingface.co/datasets/LegrandFrederic/nebo1337-GetTheRubber)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 200
- **Training steps**: 1
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=replicate_groot_training_pipeline)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=replicate_groot_training_pipeline)
|
BootesVoid/cm9vqqh1i002n3beapwc5ddh1_cma2lvy1k001xw9r2gfuf2qfy
|
BootesVoid
| 2025-04-29T15:06:43Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-04-29T15:06:37Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: BLONDE
---
# Cm9Vqqh1I002N3Beapwc5Ddh1_Cma2Lvy1K001Xw9R2Gfuf2Qfy
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `BLONDE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "BLONDE",
"lora_weights": "https://huggingface.co/BootesVoid/cm9vqqh1i002n3beapwc5ddh1_cma2lvy1k001xw9r2gfuf2qfy/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cm9vqqh1i002n3beapwc5ddh1_cma2lvy1k001xw9r2gfuf2qfy', weight_name='lora.safetensors')
image = pipeline('BLONDE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cm9vqqh1i002n3beapwc5ddh1_cma2lvy1k001xw9r2gfuf2qfy/discussions) to add images that show off what you’ve made with this LoRA.
|
yachty66/stable-diffusion-v1-5
|
yachty66
| 2025-04-29T15:05:12Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"arxiv:2207.12598",
"arxiv:2112.10752",
"arxiv:2103.00020",
"arxiv:2205.11487",
"arxiv:1910.09700",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-04-28T20:54:26Z |
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
---
# Stable Diffusion v1-5 Model Card
### ⚠️ This repository is a mirror of the now deprecated `runwayml/stable-diffusion-v1-5`; this repository and its organization are not affiliated in any way with RunwayML.
Modifications to the original model card are in <span style="color:crimson">red</span> or <span style="color:darkgreen">green</span>
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion blog](https://huggingface.co/blog/stable_diffusion).
The **Stable-Diffusion-v1-5** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
checkpoint and subsequently fine-tuned on 595k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
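At sampling time, classifier-free guidance combines the unconditional and text-conditional noise predictions. A minimal sketch of that combination (the guidance scale `s` is a user-chosen sampler parameter, commonly around 7.5; real predictions are tensors, flat lists stand in here):

```python
def cfg_combine(eps_uncond, eps_cond, guidance_scale):
    """eps = eps_uncond + s * (eps_cond - eps_uncond), applied element-wise."""
    return [u + guidance_scale * (c - u) for u, c in zip(eps_uncond, eps_cond)]

# With s = 1 the result is exactly the conditional prediction;
# larger s pushes the sample harder toward the text prompt.
print(cfg_combine([0.0, 1.0], [1.0, 3.0], guidance_scale=7.5))  # → [7.5, 16.0]
```

Dropping the text conditioning for 10% of training steps is what gives the model a usable unconditional prediction for this formula.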
You can use this both with the [🧨Diffusers library](https://github.com/huggingface/diffusers) and [RunwayML GitHub repository](https://github.com/runwayml/stable-diffusion) (<span style="color:crimson">now deprecated</span>), <span style="color:darkgreen">ComfyUI, Automatic1111, SD.Next, InvokeAI</span>.
### Use with Diffusers
```py
from diffusers import StableDiffusionPipeline
import torch
model_id = "sd-legacy/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse.png")
```
For more detailed instructions, use-cases and examples in JAX follow the instructions [here](https://github.com/huggingface/diffusers#text-to-image-generation-with-stable-diffusion)
### Use with GitHub Repository <span style="color:crimson">(now deprecated)</span>, <span style="color:darkgreen">ComfyUI or Automatic1111</span>
1. Download the weights
- [v1-5-pruned-emaonly.safetensors](https://huggingface.co/sd-legacy/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors) - ema-only weight. uses less VRAM - suitable for inference
- [v1-5-pruned.safetensors](https://huggingface.co/sd-legacy/stable-diffusion-v1-5/resolve/main/v1-5-pruned.safetensors) - ema+non-ema weights. uses more VRAM - suitable for fine-tuning
2. Follow instructions [here](https://github.com/runwayml/stable-diffusion). <span style="color:crimson">(now deprecated)</span>
3. <span style="color:darkgreen">Use locally with <a href="https://github.com/comfyanonymous/ComfyUI">ComfyUI</a>, <a href="https://github.com/AUTOMATIC1111/stable-diffusion-webui">AUTOMATIC1111</a>, <a href="https://github.com/vladmandic/automatic">SD.Next</a>, <a href="https://github.com/invoke-ai/InvokeAI">InvokeAI</a></span>
## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**

```bibtex
@InProceedings{Rombach_2022_CVPR,
    author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
    title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {10684-10695}
}
```
# Uses
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
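A minimal sketch of how such an embedding-space check can work (the toy vectors and thresholds here are illustrative assumptions; the real checker's concepts and weights are intentionally hidden, as noted above):

```python
import numpy as np

def safety_check(image_emb, concept_embs, thresholds):
    """Flag an image whose CLIP-space embedding is too close to any
    unsafe-concept embedding (illustrative sketch, not the real checker)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return any(cos(image_emb, c) > t for c, t in zip(concept_embs, thresholds))

concepts = np.eye(3)                 # three dummy "concept" directions
aligned = np.array([0.0, 1.0, 0.0])  # identical to concept 1 -> cosine 1.0
diffuse = np.array([1.0, 1.0, 1.0])  # cosine ~0.577 with every concept
print(safety_check(aligned, concepts, [0.9, 0.9, 0.9]))  # True
print(safety_check(diffuse, concepts, [0.9, 0.9, 0.9]))  # False
```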
## Training
**Training Data**
The model developers used the following dataset for training the model:
- LAION-2B (en) and subsets thereof (see next section)
**Training Procedure**
Stable Diffusion v1-5 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
- Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4
- Text prompts are encoded through a ViT-L/14 text-encoder.
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
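The steps above can be condensed into a toy training step (a NumPy sketch under stated assumptions: `unet` is a placeholder callable and the noise schedule is invented, not the one used in training):

```python
import numpy as np

rng = np.random.default_rng(0)

def ldm_training_step(latents, text_emb, unet, alphas_cumprod):
    """One simplified latent-diffusion training step (illustrative only)."""
    t = int(rng.integers(0, len(alphas_cumprod)))            # random timestep
    noise = rng.standard_normal(latents.shape)               # target noise
    a = alphas_cumprod[t]
    noisy = np.sqrt(a) * latents + np.sqrt(1.0 - a) * noise  # forward diffusion
    pred = unet(noisy, t, text_emb)                          # UNet predicts the added noise
    return float(np.mean((pred - noise) ** 2))               # reconstruction objective

# toy stand-ins: 64x64x4 latents (a 512x512 image with f=8), ViT-L/14-sized
# text hidden states, and a dummy "UNet" that always predicts zero noise
latents = rng.standard_normal((64, 64, 4))
text_emb = rng.standard_normal((77, 768))
alphas_cumprod = np.linspace(0.999, 0.01, 1000)
loss = ldm_training_step(latents, text_emb, lambda x, t, c: np.zeros_like(x), alphas_cumprod)
```

With a zero-predicting stand-in, the loss is just the mean squared noise, close to 1.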
Currently six Stable Diffusion checkpoints are provided, which were trained as follows.
- [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2` - 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) Resumed from `stable-diffusion-v1-2` - 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-v1-5`](https://huggingface.co/sd-legacy/stable-diffusion-v1-5) Resumed from `stable-diffusion-v1-2` - 595,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- [`stable-diffusion-inpainting`](https://huggingface.co/sd-legacy/stable-diffusion-inpainting) Resumed from `stable-diffusion-v1-5` - then 440,000 steps of inpainting training at resolution 512x512 on “laion-aesthetics v2 5+” and 10% dropping of the text-conditioning. For inpainting, the UNet has 5 additional input channels (4 for the encoded masked-image and 1 for the mask itself) whose weights were zero-initialized after restoring the non-inpainting checkpoint. During training, we generate synthetic masks and, in 25% of cases, mask everything.
- **Hardware:** 32 x 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 2
- **Batch:** 32 x 8 x 2 x 4 = 2048
- **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant
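The stated batch size follows from the hardware and accumulation figures (32 nodes × 8 GPUs × 2 accumulation steps × a per-GPU batch of 4, which is how we read the card's shorthand):

```python
# Effective batch size implied by the card's "32 x 8 x 2 x 4 = 2048"
nodes, gpus_per_node, grad_accum, per_gpu_batch = 32, 8, 2, 4
effective_batch = nodes * gpus_per_node * grad_accum * per_gpu_batch
print(effective_batch)  # 2048
```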
## Evaluation Results
Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
5.0, 6.0, 7.0, 8.0) and 50 PNDM/PLMS sampling
steps show the relative improvements of the checkpoints:

Evaluated using 50 PLMS sampling steps and 10,000 random prompts from the COCO2017 validation set at 512x512 resolution. Not optimized for FID scores.
## Environmental Impact
**Stable Diffusion v1 Estimated Emissions**
Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 150000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
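The emissions figure is reproducible from the numbers above (assumed values: roughly 250 W average draw per A100 PCIe 40GB and about 0.3 kg CO2eq/kWh for the us-east grid; neither figure is stated on the card):

```python
gpu_hours = 150_000        # "Hours used" from the card
power_kw = 0.25            # assumed ~250 W average draw per A100 PCIe 40GB
grid_kg_co2_per_kwh = 0.3  # assumed carbon intensity for AWS us-east
emissions_kg = round(gpu_hours * power_kw * grid_kg_co2_per_kwh)
print(emissions_kg)  # 11250, matching the reported 11 250 kg CO2 eq.
```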
## Citation
```bibtex
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
```
*This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
|
PR0G3T/q-Taxi-v3
|
PR0G3T
| 2025-04-29T15:04:18Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-04-29T15:04:15Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # the snippet also assumes the `load_from_hub` helper from the Hugging Face Deep RL course

model = load_from_hub(repo_id="PR0G3T/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
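Once loaded, the agent simply acts greedily on its Q-table. A minimal sketch (the `qtable` key and the toy values are assumptions, not guaranteed by this checkpoint):

```python
import numpy as np

def greedy_action(qtable, state):
    # deterministic policy: take the action with the highest Q-value for this state
    return int(np.argmax(qtable[state]))

# toy 2-state, 3-action Q-table; in practice you would index the loaded table,
# e.g. model["qtable"] (an assumed key)
qtable = np.array([[0.1, 0.2, 0.9],
                   [0.5, 0.4, 0.0]])
print(greedy_action(qtable, 0))  # 2
print(greedy_action(qtable, 1))  # 0
```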
|
oladipupojames730/Hoja16y
|
oladipupojames730
| 2025-04-29T15:03:42Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-29T15:03:42Z |
---
license: apache-2.0
---
|
PR0G3T/q-FrozenLake-v1-4x4-noSlippery
|
PR0G3T
| 2025-04-29T15:01:42Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-04-29T15:01:38Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the snippet also assumes the `load_from_hub` helper from the Hugging Face Deep RL course

model = load_from_hub(repo_id="PR0G3T/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
xixixi0503/modernbert-biencoder
|
xixixi0503
| 2025-04-29T15:01:32Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:645",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:nomic-ai/modernbert-embed-base",
"base_model:finetune:nomic-ai/modernbert-embed-base",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-04-29T15:01:07Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:645
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: Quali sono i tassi contributivi per l'AVS, l'AI e le IPG?
sentences:
- "2.01 Contributi\nContributi salariali \nall’AVS, all’AI e alle IPG\nStato al\
\ 1° gennaio 2025\f2In breve\nSono tenute a pagare i contributi sul proprio salario\
\ all’AVS, all’AI e alle IPG \nle persone che esercitano un’attività lucrativa\
\ e sono assicurate in Svizzera. \nA determinate condizioni lo sono, inoltre,\
\ anche le persone che lavorano \nall’estero per conto di datori di lavoro con\
\ sede in Svizzera.\nQuesto opuscolo informa i datori di lavoro sui contributi\
\ paritetici all’AVS, \nall’AI e alle IPG.\nObbligo contributivo\n1 Quando ha\
\ inizio l’obbligo contributivo?\nTutte le persone che esercitano un’attività\
\ lucrativa devono pagare con -\ntributi dal 1° gennaio dell’anno civile successivo\
\ a quello in cui compiono \n17 anni. \nEsempio: un’apprendista che compie 17 anni\
\ il 15 agosto 2025 è tenuta a \nversare contributi a partire dal 1° gennaio 2026.\n\
Anno di \nnascitaAnno civile\n2025 2026 2027 2028\n2007 assoggettato assoggettato\
\ assoggettato assoggettato\n2008 esente assoggettato assoggettato assoggettato\n\
2009 esente esente assoggettato assoggettato\n2010 esente esente esente assoggettato\n\
Fino al 31 dicembre dell’anno civile in cui compiono 20 anni, i membri della \n\
famiglia che lavorano nell’azienda versano contributi soltanto sul salario in\
\ \ncontanti. Dall’anno successivo in poi pagano contributi anche sul salario\
\ in \nnatura (p. es. vitto e alloggio).\nGli apprendisti, invece, devono pagare\
\ contributi sia sul salario in contanti \nche su quello in natura dal 1° gennaio\
\ dell’anno civile successivo a quello \nin cui compiono 17 anni.\n2 Quando termina\
\ l’obbligo contributivo?\nL’obbligo contributivo termina di regola con la cessazione\
\ dell’attività lu -\ncrativa.\nChi smette di esercitare un’attività lucrativa\
\ prima dell’età di riferimento \ndeve continuare a pagare contributi in quanto\
\ persona senza attività lucra -\ntiva (cfr. opuscolo informativo 2.03 – Contributi\
\ delle persone senza attività \nlucrativa all’AVS, all’AI e alle IPG ).\f3Chi\
\ continua a lavorare oltre l’età di riferimento rimane soggetto all’obbligo \n\
contributivo, ma può beneficiare di una franchigia (v. punto 14 segg.).\nL’età\
\ di riferimento è di 65 anni. Per le donne nate prima del 1964 valgono \nperò\
\ le seguenti disposizioni speciali:\nClasse d’età Età di riferimento \n1960 64\
\ anni\n1961 64 anni e 3 mesi \n1962 64 anni e 6 mesi \n1963 64 anni e 9 mesi\
\ \n1964 65 anni \nI membri della famiglia che collaborano nell’azienda e hanno\
\ raggiunto \nl’età di riferimento versano contributi soltanto sul salario in\
\ contanti, se \ndel caso dopo deduzione della franchigia (cfr. punto 14 segg.).\
\ In questo \ncaso non devono più pagare contributi sul salario in natura (p. es.\
\ vitto e \nalloggio).\n3 A quanto ammontano i tassi contributivi?\nTassi contributivi\n\
AVS 8,7 %\nAI 1,4 %\nIPG 0,5 %\nTotale 10,6 %\nI datori di lavoro deducono il\
\ 5,3 % del salario dei dipendenti per la loro \nquota di contributi e versano\
\ l’importo alla cassa di compensazione unita -\nmente alla propria quota (anch’essa\
\ del 5,3 %). Al totale, pari al 10,6 %, \nva aggiunto il contributo all’assicurazione\
\ contro la disoccupazione (cfr. \nopuscolo informativo 2.08 – Contributi all’assicurazione\
\ contro la disoccu -\npazione ).\nLe casse di compensazione riscuotono inoltre\
\ un contributo per le spese di \namministrazione, che va a carico dei datori\
\ di lavoro.\nI salariati i cui datori di lavoro non sono soggetti all’obbligo\
\ contributivo \n(p. es. ambasciate) devono di regola pagare i loro contributi\
\ da soli in base \nal tasso usuale per i datori di lavoro e i salariati.\f4Riscossione\
\ dei contributi dei datori di lavoro\n4 Come vengono fissati i contributi nella\
\ procedura \nordinaria?\nLe casse di compensazione fissano contributi provvisori\
\ basati sulla somma \nstimata dei salari, i cosiddetti contributi d’acconto.\
\ Affinché questi siano \ndeterminati correttamente, è importante che i datori\
\ di lavoro mettano a \ndisposizione della cassa di compensazione tutti i documenti\
\ necessari. In \ncaso di variazione rilevante della somma dei salari, essi sono\
\ tenuti a infor -\nmarne la cassa di compensazione.\nI contributi definitivi\
\ sono fissati sulla base della dichiarazione dei salari del \ndatore di lavoro.\
\ Questa dichiarazione deve essere inoltrata alla cassa di \ncompensazione al\
\ più tardi entro il 30 gennaio seguente la fine del periodo \ncontributivo annuale.\
\ I datori di lavoro che non rispettano questo termine \ndovranno pagare interessi\
\ di mora su un’eventuale differenza. Molte casse di \ncompensazione possono anche\
\ ricevere la dichiarazione per via elettronica \n(p. es. procedura unitaria per\
\ la notifica dei salari ELM, www.swissdec.ch ).\nLa cassa di compensazione calcola\
\ la differenza tra i contributi d’acconto \npagati e i contributi definitivi:\n\
• se gli acconti versati sono superiori ai contributi definitivi, la cassa di\
\ \ncompensazione rimborsa la differenza al datore di lavoro;\n• se gli acconti\
\ versati sono inferiori ai contributi definitivi, la cassa di \ncompensazione\
\ fattura la differenza al datore di lavoro.\nA certe condizioni, la cassa di\
\ compensazione può permettere ai datori di \nlavoro di pagare i contributi effettivamente\
\ dovuti e non quelli provvisori, a \ncondizione che sia garantito il pagamento\
\ puntuale dei contributi.\n5 Quando devono essere pagati i contributi da parte\
\ dei \ndatori di lavoro?\nI contributi devono essere pagati trimestralmente quando\
\ la somma annua \ndei salari non supera i 200 000 franchi e mensilmente quando\
\ è superiore \na questo importo. L’ultimo termine di pagamento è sempre il 10° giorno\
\ \nseguente la fine del trimestre, rispettivamente la fine del mese. \nEsempio:\
\ i contributi del 1° trimestre devono essere pagati al più tardi entro \nil 10 aprile.\
\ \f5Se i contributi d’acconto pagati sono inferiori ai contributi definitivi,\
\ i datori \ndi lavoro ricevono una fattura pagabile entro 30 giorni. Il termine\
\ non è \ndi un mese, bensì di 30 giorni e non può essere prolungato, a meno che\
\ \nl’ultimo giorno non sia un sabato, una domenica o un giorno festivo. In \n\
questo caso è prolungato fino al seguente giorno lavorativo. Il termine \ndi 30 giorni\
\ non decorre dalla data di ricezione, ma dal giorno seguente \nla data di emissione\
\ della fattura da parte della cassa di compensazione. \nSulla fattura è sempre\
\ indicata la data entro la quale l’importo deve essere \naccreditato sul conto\
\ della cassa di compensazione.\nImportante: una fattura è considerata pagata\
\ quando l’importo è accredi -\ntato sul conto della cassa di compensazione e\
\ non quando è stato impar -\ntito l’ordine di pagamento. Sui contributi che non\
\ sono stati versati entro il \ntermine prescritto viene calcolato un interesse\
\ di mora annuo del 5 %, che \nva a carico dei datori di lavoro.\n6 Come vengono\
\ fissati i contributi nella procedura \nsemplificata?\nLa procedura di conteggio\
\ semplificata rientra nel quadro della legge fe -\nderale contro il lavoro in\
\ nero (LLN) e permette ai datori di lavoro di con -\nteggiare i contributi sociali\
\ (AVS/AI/IPG/AD/assegni familiari), l’imposta alla \nfonte e l’assicurazione\
\ contro gli infortuni secondo un procedimento snel -\nlito. La procedura è facoltativa\
\ e pensata in primo luogo per i rapporti di \nlavoro di breve durata o di poca\
\ entità, come per esempio quelli esistenti di \nregola nelle economie domestiche\
\ private.\nVanno adempiute le seguenti condizioni:\n• il salario per dipendente\
\ non supera i 22 680 franchi l’anno (nel 2025);\n• la massa salariale dell’azienda\
\ non supera i 60 480 franchi l’anno (cioè \nil doppio della rendita massima AVS\
\ annua nel 2025);\n• la procedura di conteggio semplificata è utilizzata per\
\ i salari di tutto il \npersonale soggetto all’obbligo contributivo.\nQuesta\
\ procedura non è però applicabile:\n• alle società di capitali (SA, Sagl ecc.)\
\ e alle società cooperative;\n• al coniuge e ai figli del datore di lavoro occupati\
\ nell’azienda.\nLa richiesta va presentata alla cassa di compensazione, che sarà\
\ an -\nche la principale interlocutrice per tutte le questioni concernenti la\
\ pro -\ncedura semplificata. Il conteggio dei contributi sociali e dell’imposta\
\ \nalla fonte è effettuato solo una volta all’anno (cfr. opuscolo informativo\
\ \n2.07 – Procedure di conteggio semplificate per i datori di lavoro ).\f6Interessi\n\
7 Quando viene richiesto il pagamento di interessi di \nmora ai datori di lavoro?\n\
Un interesse di mora viene riscosso in caso di conteggio o pagamento tar -\ndivo\
\ dei contributi, a prescindere dal fatto che si tratti di una colpa o di \nun’intimazione.\n\
ConcerneConteggio \npagamentoGli interessi \ndecorrono dal\nContributi d’acconto\
\ o \ncontributi effettivi30° giorno dopo la \nfine del mese risp. del \ntrimestre1° giorno\
\ seguente la \nfine del mese risp. del \ntrimestre\nConteggio30 gennaio seguente\
\ \nla fine dell’anno contri -\nbutivo1° gennaio seguente la \nfine dell’anno\
\ contri -\nbutivo\nDifferenza tra i con -\ntributi d’acconto e i \ncontributi\
\ definitivi30° giorno dopo la \nfatturazione1° giorno dopo la \nfatturazione\n\
Contributi arretrati \ndegli anni precedenti1° gennaio seguente la \nfine dell’anno\
\ contribu -\ntivo in questione\n8 Quando sono versati interessi compensativi\
\ ai datori di \nlavoro?\nGeneralmente gli interessi compensativi sono versati\
\ unicamente per con -\ntributi pagati ma non dovuti che devono essere rimborsati\
\ o compensati \ndalla cassa di compensazione. Gli interessi decorrono dal 1° gennaio\
\ se -\nguente la fine dell’anno civile in cui sono stati pagati i contributi\
\ non dovuti \nsino alla data del rimborso completo.\nLa cassa di compensazione\
\ competente versa interessi compensativi anche \nquando i contributi d’acconto\
\ pagati sono superiori ai contributi definitivi e \nla differenza non è stata\
\ rimborsata entro il termine di 30 giorni dalla data \ndi ricezione del conteggio.\
\ Gli interessi decorrono dal momento in cui il \nconteggio completo è pervenuto\
\ alla cassa.\f79 Come avviene il calcolo degli interessi?\nGli interessi sono\
\ calcolati al giorno (un mese corrisponde a 30 giorni, un \nanno a 360 giorni).\
\ Il tasso unico d’interesse è del 5 %.\nEsempio: \nLa dichiarazione dei salari\
\ per l’anno 2024 è pervenuta alla cassa di com -\npensazione entro il termine\
\ stabilito, vale a dire entro il 30 gennaio 2025. \nTuttavia, il pagamento della\
\ differenza tra i contributi d’acconto e i contributi \ndefinitivi avverrà più\
\ tardi, ossia il 2 aprile 2025 anziché il 26 marzo 2025 \ncome previsto (30 giorni\
\ dopo la fatturazione).\n• Contributi d’acconto pagati: 40 000 franchi\n• Contributi\
\ definitivi: 100 000 franchi\n• Differenza tra i contributi d’acconto e i contributi\
\ definitivi: \n60 000 franchi\n• Data di fatturazione da parte della cassa di\
\ compensazione: \n24 febbraio 2025\n• Data di ricezione della fattura da parte\
\ del datore di lavoro: \n26 febbraio 2025\n• Data di ricezione del pagamento\
\ alla cassa di compensazione: \n2 aprile 2025\n• Periodo di calcolo degli interessi\
\ di mora dal 25 febbraio 2025 al \n2 aprile 2025 (6 + 30 + 2 = 38 giorni): \
\ \n60 000 franchi × (38 giorni / 360 giorni) × 5 % = 316.70 franchi\nSalario\
\ determinante\n10 Quale tipo di retribuzioni comprende il salario \ndeterminante?\n\
Il salario sul quale devono essere versati i contributi è chiamato salario de\
\ -\nterminante e comprende tutte le retribuzioni, versate in Svizzera o all’estero,\
\ \nche i salariati ricevono per il lavoro svolto, in modo particolare:\na)i salari\
\ orari, giornalieri, settimanali, mensili ecc. come pure i salari a \nfattura\
\ (a cottimo) e a premi, compresi i premi e le indennità per le \nore di lavoro\
\ supplementari, per il lavoro notturno e per le supplenze;\nb)le indennità di\
\ residenza e di rincaro;\nc)le gratifiche, i regali per anzianità di servizio,\
\ i premi di fedeltà, di \nrischio e di produzione e indennità analoghe;\nd)i\
\ vantaggi valutabili in denaro derivanti dalle partecipazioni di colla -\nboratore;\
\ per la determinazione del momento della riscossione dei \ncontributi e per la\
\ valutazione si applicano le disposizioni sull’imposta \nfederale diretta;\f\
8e)i benefici, fino a un importo corrispondente a un salario usuale del \nramo,\
\ dei dipendenti contemporaneamente titolari di diritti di parteci -\npazione\
\ che, per il lavoro svolto, non percepiscono alcun salario o ne \npercepiscono\
\ uno sproporzionatamente basso e che, nel contempo, \nricevono un dividendo manifestamente\
\ eccessivo;\nf)i redditi di accomandanti derivanti da un rapporto di lavoro con\
\ la \nsocietà in accomandita;\ng)le mance e le tasse di servizio qualora costituiscano\
\ un elemento im -\nportante del salario;\nh)le prestazioni in natura regolari\
\ come vitto e alloggio (cfr. punto 12), \nl’utilizzazione a fini privati di veicoli\
\ e alloggi di servizio ecc.;\ni)le provvigioni e le commissioni;\nj)le percentuali\
\ (tantièmes), le indennità fisse e i gettoni di presenza \nassegnati a membri\
\ dell’amministrazione e agli organi dirigenti;\nk)il reddito dei membri delle\
\ autorità federali, cantonali e comunali;\nl)le sportule e le indennità fisse\
\ ricevute da assicurati la cui attività è \ndisciplinata dal diritto pubblico;\n\
m)gli onorari di liberi docenti e di altri insegnanti retribuiti in modo ana -\n\
logo;\nn)la copertura salariale in caso di infortunio o malattia (ad eccezione\
\ \ndelle prestazioni assicurative);\no)la copertura salariale e le indennità\
\ di perdita di guadagno per chi \npresta servizio e in caso di maternità e paternità;\n\
p)i contributi dovuti dal salariato all’AVS, AI, IPG o AD pagati dai datori \n\
di lavoro come pure le imposte pagate dai datori di lavoro; è eccet -\ntuata l’assunzione\
\ dei contributi dovuti dal salariato su prestazioni in \nnatura e salari globali;\n\
q)le indennità di vacanza o per i giorni festivi;\nr)le prestazioni del datore\
\ di lavoro al termine del rapporto di lavoro \na condizione che non vengano escluse\
\ dal salario determinante \n(cfr. opuscolo informativo 2.05 – Retribuzioni versate\
\ al termine del \nrapporto di lavoro );\ns)le indennità giornaliere dell’AD e\
\ le indennità per insolvenza;\nt)la perdita di salario durante il lavoro ridotto\
\ o la sospensione del la -\nvoro a causa di intemperie ai sensi dell’AD (cfr.\
\ opuscolo informativo \n2.11 – Obbligo contributivo sulle indennità per lavoro\
\ ridotto o per \nintemperie );\nu)le indennità giornaliere dell’AI;\nv)le indennità\
\ giornaliere dell’assicurazione militare;\nw)le indennità del datore di lavoro\
\ per il normale viaggio del salariato \ndal luogo di domicilio al posto di lavoro\
\ e le spese usuali per i pasti \ndei salariati.\f911 Quali retribuzioni non fanno\
\ parte del salario \ndeterminante?\na)il soldo militare, le indennità di funzione\
\ nella protezione civile e le \nindennità analoghe al soldo nei corpi pubblici\
\ dei vigili del fuoco fino \na un massimo di 5 300 franchi (la parte di salario\
\ eccedente questo \nimporto è soggetta a contribuzione) e le indennità per i\
\ corsi per mo -\nnitori di giovani tiratori;\nb)le prestazioni assicurative in\
\ caso d’infortunio, malattia o invalidità;\nc)le prestazioni dell’aiuto sociale\
\ e da organizzazioni d’assistenza \n(Pro Juventute, organizzazioni religiose,\
\ Pro Infirmis ecc.);\nd)le prestazioni regolamentari di istituti di previdenza\
\ professionale se \nil beneficiario al momento del caso previdenziale o dello\
\ scioglimento \ndell’istituto di previdenza può richiedere personalmente le prestazioni;\n\
e)gli assegni familiari (assegni per figli, formazione professionale, eco -\n\
nomia domestica, matrimonio e nascita) conformi all’uso locale o pro -\nfessionale;\n\
f)i contributi regolamentari del datore di lavoro a istituti di previdenza \n\
esenti da tasse;\ng)i contributi versati direttamente dal datore di lavoro all’assicurazione\
\ \nmalattie e infortuni per i suoi salariati, purché versi i premi diretta -\n\
mente all’assicurazione e tutti i salariati siano trattati alla stessa ma -\n\
niera;\nh)i contributi del datore di lavoro alle casse di compensazione per asse\
\ -\ngni familiari, purché tutti i salariati siano trattati in modo uguale;\n\
i)lo stanziamento di aiuti in caso di decesso di parenti del salariato o ai \n\
suoi superstiti;\nj)le indennità di trasloco in caso di cambiamento di domicilio\
\ per motivi \nprofessionali;\nk)i regali di fidanzamento e di nozze;\nl)i premi\
\ di riconoscimento per il superamento di esami professionali, \nfino un massimo\
\ di 500 franchi;\nm)i regali del datore di lavoro in occasione di giubilei aziendali,\
\ al più \npresto 25 anni dopo la fondazione e in seguito ad intervalli di almeno\
\ \n25 anni;\nn)le prestazioni del datore di lavoro alle spese mediche, farmaceutiche,\
\ \nospedaliere o di cura, purché queste spese non siano già coperte \ndall’assicurazione\
\ malattie obbligatoria e tutti i salariati siano trattati \nin modo uguale;\n\
o)i doni in natura il cui valore non eccede 500 franchi all’anno;\f10p)le prestazioni\
\ per la formazione e il perfezionamento. Se versate dal \ndatore di lavoro, sono\
\ tuttavia escluse dal reddito da un’attività lucra -\ntiva soltanto se la formazione\
\ o il perfezionamento sono strettamente \nlegati all’attività professionale del\
\ beneficiario;\nq)Le prestazioni di assistenza straordinarie versate dal datore\
\ di lavoro \nper attenuare una situazione di grave difficoltà finanziaria del\
\ sala -\nriato, quando la copertura del fabbisogno vitale non è garantita.\n\
12 Il reddito in natura è parte del salario determinante?\nIl reddito in natura\
\ è una compente del salario che non viene versata in \ndenaro. Comprende ad esempio\
\ il vitto e l’alloggio concessi ai salariati o \nai membri della famiglia che\
\ collaborano nell’azienda. Anche il reddito in \nnatura è considerato parte del\
\ salario determinante e va valutato di con -\nseguenza:\nReddito in natura al\
\ giorno al mese\nColazione CHF 3.50 CHF 105.–\nPranzo CHF 10.00 CHF 300.–\n\
Cena CHF 8.00 CHF 240.–\nAlloggio CHF 11.50 CHF 345.–\nVitto e alloggio CHF\
\ 33.00 CHF 990.–\nSe vitto e alloggio gratuiti sono concessi non solo al salariato,\
\ ma anche ai \nsuoi familiari, si prendono in considerazione i seguenti supplementi:\n\
• per ogni familiare adulto lo stesso importo del salariato;\n• per ogni familiare\
\ minorenne la metà dell’importo del salariato.\nIl reddito in natura di altra\
\ specie è valutato e determinato dalla cassa di \ncompensazione per ogni singolo\
\ caso. L’esatta valutazione si basa sulle \ncircostanze specifiche e avviene\
\ su base individuale.\n13 Quali salari minimi vengono applicati per i membri\
\ della \nfamiglia occupati nell’azienda agricola?\nPer i membri della famiglia\
\ del titolare dell’azienda agricola che collaborano \ncon lui vengono applicate\
\ le seguenti retribuzioni globali mensili (in denaro \no in natura):\n• 2 070 franchi\
\ per i familiari non coniugati;\n• 3 060 franchi per i familiari coniugati (se\
\ entrambi i coniugi lavorano \na tempo pieno in azienda, si applica l’importo\
\ di 2 070 franchi per \nognuno di essi). Questo punto non concerne il coniuge\
\ del gestore \nstesso;\n• 690 franchi per il mantenimento di ogni figlio minorenne.\f\
11Obbligo contributivo degli aventi diritto a una rendita \ndi vecchiaia dell’AVS\n\
14 Gli aventi diritto a una rendita di vecchiaia dell’AVS \nsono soggetti all’obbligo\
\ contributivo?\nLe persone che pur avendo raggiunto l’età di riferimento esercitano\
\ ancora \nun’attività lucrativa continuano a versare i contributi all’AVS/AI/IPG,\
\ ma non \nall’assicurazione contro la disoccupazione (AD). Esse beneficiano però\
\ di \nuna franchigia.\nRinuncia facoltativa alla franchigia : i salariati hanno\
\ la possibilità di \nrinunciare all’applicazione della franchigia, affinché i\
\ contributi vengano \nconteggiati sull’intero salario. Questo può eventualmente\
\ permettere di \naumentare la propria rendita grazie alla compensazione di lacune\
\ contribu -\ntive e assicurative o all’aumento del reddito annuo medio determinante.\
\ Si \nveda al riguardo gli opuscoli informativo 3.08 – Nuovo calcolo della rendita\
\ \ndi vecchiaia dopo l’età di riferimento e Stabilizzazione dell’AVS (AVS 21)\
\ \nChe cosa cambia? .\nFranchigia in caso di svolgimento di più attività:\nGli\
\ aventi diritto a una rendita di vecchiaia dell’AVS che esercitano con -\ntemporaneamente\
\ un’attività lucrativa indipendente e una salariata hanno \ndiritto all’applicazione\
\ della franchigia per ciascuna di queste attività.\n15 A quanto ammonta la franchigia?\n\
Alle persone che hanno raggiunto l’età di riferimento e continuano a esercitare\
\ \nun’attività lucrativa viene applicata una franchigia di 16 800 franchi all’anno,\
\ \nsulla quale non devono versare contributi. Questi ultimi vengono quindi perce\
\ -\npiti soltanto sulla parte del reddito che eccede 16 800 franchi all’anno.\
\ \nSe la persona in questione lavora contemporaneamente presso diversi da -\n\
tori di lavoro, la franchigia è applicata separatamente a ogni rapporto di \n\
lavoro. Allo stesso modo, i salariati possono decidere se applicare o meno \n\
la franchigia separatamente per ogni rapporto di lavoro. Nell'anno in cui si \n\
raggiunge l'età di riferimento è deducibile soltanto la franchigia calcolata \n\
proporzionalmente sulla parte di salario percepita a partire dal mese in cui \n\
si raggiunge l'età di riferimento.\f1216 I salariati come possono rinunciare all’applicazione\
\ \ndella franchigia? \nI salariati che non applicano la franchigia e desiderano\
\ versare i contributi \nAVS/AI/IPG sull’intero salario devono comunicarlo per\
\ tempo al loro datore \ndi lavoro, vale a dire al più tardi:\n• al pagamento\
\ del primo salario dopo il raggiungimento dell’età di rife -\nrimento, oppure\n\
• per gli anni seguenti, fino al pagamento del primo salario di ciascuno \ndegli\
\ anni civili successivi. \nSe il lavoratore accetta il salario versatogli, che\
\ è già stato ridotto dell’im -\nporto della franchigia, acconsente all’applicazione\
\ di quest’ultima.. \nLa decisione vale per il singolo datore di lavoro e per\
\ l’intero anno civile. Si \nrinnova automaticamente l’anno civile successivo,\
\ se il salariato non comu -\nnica al datore di lavoro una decisione in altro\
\ senso. \n17 Come è calcolata la franchigia se l’attività dura meno \ndi un\
\ anno?\nIl datore di lavoro deduce dal salario annuo la franchigia di 16 800 franchi.\
\ \nQualora la retribuzione non si riferisca o l’attività lucrativa non si estenda\
\ \nall’anno intero, la franchigia viene calcolata proporzionalmente alla fra\
\ -\nzione annua corrispondente, ossia 1 400 franchi per ogni mese civile intero\
\ \no iniziato.\nEsempio: \nse il beneficiario di una rendita di vecchiaia lavora\
\ dal 30 marzo al 6 giugno, \nsi calcolano sia marzo che giugno come mesi interi,\
\ ossia complessivamente \n4 mesi. La franchigia è dunque di 4 × 1 400 franchi,\
\ ossia 5 600 franchi.\f18 Esempi di calcolo\nEsempio 1 – Attività esercitata\
\ durante tutto l’anno\nUn indipendente continua a gestire la propria azienda\
\ dopo aver compiuto \ni 65 anni. È inoltre membro del consiglio d’amministrazione\
\ di una società \nanonima. Dall’onorario versatogli per la funzione di membro\
\ del consiglio \nd’amministrazione il datore di lavoro deduce la franchigia,\
\ senza che il la -\nvoratore reagisca. Ne risulta pertanto il seguente conteggio:\n\
Utile netto annuo \ndell’azienda Reddito come mem -\nbro del consiglio di \namministrazione\n\
\ CHF 30 500.– CHF 18 000.–\nFranchigia - CHF 16 800.– - CHF 16 800.–\nSalario\
\ soggetto a \ncontribuzione CHF 13 700.– CHF 1 200.–\nEsempio 2 – Attività\
\ svolta per meno di un anno \nUna salariata di 66 anni lavora dal 1° marzo al\
\ 6 aprile presso la ditta C \ne poi dal 23 al 30 aprile presso la ditta D, accettando\
\ la deduzione delle \nfranchigie. Ne risultano i seguenti conteggi salariali:\n\
Ditta C Ditta D\nDal 1° marzo al 6 aprile Dal 23 al 30 aprile\nSalario mensile\
\ per marzo CHF 8 000.–\nSalario mensile per aprile CHF 1 200.– CHF\
\ 2 100.–\nTotale CHF 9 200.– CHF 2 100.–\nFranchigia - CHF 2 800.– - CHF\
\ 1 400.–\nImporto soggetto a \ncontribuzione CHF 6 400.– CHF 700.–\f14Esempio \
\ 3 – Attività esercitata durante tutto l’anno e rinuncia \nall’applicazione della\
\ franchigia\n• Un avente diritto a una rendita di vecchiaia dell’AVS lavora dal\
\ 1° gen -\nnaio 2025 per le ditte A e B. I datori di lavoro applicano la franchigia\
\ \nai salari versati.\n• Nel mese di marzo il salariato comunica alla ditta A\
\ di voler rinunciare \nall’applicazione della franchigia. La comunicazione è\
\ tardiva e quindi \nla ditta A non può tenerne conto per l’anno 2025.\n• Assicura\
\ però al lavoratore che a partire dal 1° gennaio 2026 la fran -\nchigia non verrà\
\ più dedotta. Ne risultano i seguenti conteggi salariali:\nAnno 2025 Ditta A\
\ Ditta B \nSalario annuo CHF 19 200.– CHF 18 000.–\nFranchise - CHF 16 800.–\
\ - CHF 16 800.–\nImporto soggetto a contribuzione CHF 2 400.– CHF 1 200.–\n\
Anno 2026 Ditta A Ditta B \nSalario annuo CHF 21 300.– CHF 18 200.–\nFranchigia\
\ - CHF 0.– - CHF 16 800.–\nImporto soggetto a contribuzione CHF 21 300.– \
\ CHF 1 400.–\nContributi sul salario di poco conto\n19 I salari di poco conto\
\ sono soggetti a contribuzione?\nSe il salario determinante non supera i 2 500 franchi\
\ per anno civile e per \nrapporto di lavoro, i contributi sono percepiti soltanto\
\ a richiesta dell’assi -\ncurato.\nPer le persone occupate nelle economie domestiche\
\ private i contributi \nvanno versati in ogni caso, a prescindere dall’ammontare\
\ del reddito (cfr. \nopuscolo informativo 2.06 – Lavoro domestico ). Sono tuttavia\
\ eccettuati \ni giovani fino al 31 dicembre dell’anno successivo al compimento\
\ del \n25° anno d’età, il cui salario non supera i 750 franchi per anno civile\
\ e \nper datore di lavoro. Questi assicurati possono esigere il pagamento dei\
\ \ncontributi.\nLe persone impiegate da produttori di danza e di teatro, dalle\
\ orchestre, \nda produttori nell’ambito fonografico e audiovisivo, dalle radio,\
\ dalle tele -\nvisioni e dalle scuole del settore artistico devono versare i\
\ contributi in ogni \ncaso, a prescindere dall’ammontare del reddito.\f15Contributi\
\ su pagamenti posticipati del salario\n20 Cosa si intende per pagamento posticipato\
\ del salario?\nPer pagamento posticipato del salario si intende il versamento\
\ effettuato \nnon immediatamente alla fine un determinato periodo di paga. Ciò\
\ avviene \nad esempio nel caso di quote di utile, provvigioni, gratifiche, retribuzioni\
\ di \nconsigli d’amministrazione e tantièmes.\n21 Come viene accertato l’obbligo\
\ di contribuzione sui \npagamenti posticipati?\nPer accertare l’obbligo di contribuzione\
\ sui pagamenti posticipati è deter -\nminante il momento in cui è stato prestato\
\ il lavoro e non quello in cui \nviene versato il salario. \nI contributi sono\
\ quindi dovuti sui pagamenti posticipati soltanto se il sa -\nlariato, nel momento\
\ in cui ha prestato il lavoro, era assicurato e soggetto \nall’obbligo di contribuzione.\n\
Esempio: \nun giovane inizia un apprendistato il 1° maggio 2024 e compie 17 anni\
\ il \n1° ottobre 2024. Dal 1° gennaio 2025 è soggetto all’obbligo di contribu\
\ -\nzione AVS. Nel mese di maggio 2025 riceve una gratifica per l’intero primo\
\ \nanno d’apprendistato (da maggio 2024 ad aprile 2025). Poiché è tenuto \na\
\ versare contributi soltanto dal 1° gennaio 2025, solo 1/3 della gratifica \n\
(mesi da gennaio ad aprile 2025) è soggetto a contribuzione.\n22 Qual è il momento\
\ determinante per il calcolo dei \ncontributi?\nPer calcolare i contributi sui\
\ pagamenti salariali posticipati è determinante il \nmomento in cui viene versato\
\ il salario e non quello in cui è stato prestato il \nlavoro. Ciò significa che\
\ il calcolo dei contributi viene effettuato secondo i \ntassi, le franchigie\
\ e i limiti massimi vigenti al momento del versamento del \nsalario. È fatto\
\ salvo il punto 23.\nEsempio: \nse una gratifica viene versata nel 2025, si applicano\
\ i tassi contributivi e le \nfranchigie stabiliti per il 2025, anche se il lavoro\
\ è stato svolto nel 2024.\f23 In quali casi il datore di lavoro deve indicare\
\ \nseparatamente i pagamenti posticipati?\nIl datore di lavoro deve indicare\
\ separatamente nell’attestazione salariale i \npagamenti posticipati se:\n• il\
\ pagamento è stato effettuato a favore di una persona assicurata che \nnell’anno\
\ del versamento non è più alle sue dipendenze;\n• le disposizioni sull’obbligo\
\ contributivo sono state modificate tra il mo -\nmento in cui è stato prestato\
\ il lavoro e quello in cui viene versato il \nsalario.\nIn questi casi il datore\
\ di lavoro deve indicare con esattezza, nella colonna \n«durata di contribuzione»,\
\ a quali mesi il pagamento posticipato si riferisce, \ndistinguendo gli anni\
\ civili. Solo così la cassa di compensazione è in grado \ndi registrare correttamente\
\ il reddito della persona assicurata nel suo conto \nindividuale, evitando in\
\ tal modo qualsiasi pregiudizio nel calcolo della ren -\ndita. Il datore di lavoro\
\ non deve indicare separatamente nel certificato di \nsalario i pagamenti posticipati\
\ non menzionati in questo numero, ma può \nindicarli insieme ai salari versati\
\ per l’anno civile in corso.\nRegolamentazione speciale su richiesta scritta\
\ del salariato: se il richiedente \npuò dimostrare che il reddito registrato\
\ nell’anno del versamento proviene \nda un’attività lucrativa esercitata in un\
\ anno anteriore per il quale i contri -\nbuti versati non hanno raggiunto l’importo\
\ minimo, la cassa di compen -\nsazione registra il reddito nell’anno in cui\
\ è stata svolta la relativa attività \nlucrativa. La richiesta può essere presentata\
\ fino all’insorgenza dell’evento \nassicurato.\nIn questi casi i contributi vengono\
\ calcolati secondo i tassi, le franchigie \ne i limiti massimi validi al momento\
\ in cui è stata fornita la prestazione \nlavorativa.\f17Contributi sulle indennità\
\ di perdita di guadagno \n(IPG) e sulle indennità giornaliere dell’AI, dell’AD\
\ \ne dell’assicurazione militare\n24 I datori di lavoro devono pagare contributi\
\ sulle \nindennità di perdita di guadagno e sulle indennità \ngiornaliere?\n\
Sì. Anche le indennità di perdita di guadagno per chi presta servizio e \nin caso\
\ di maternità, di congedo per l’altro genitore,"
- "4.03 Prestations de l’AI\nMoyens auxiliaires de l’AI\nEtat au 1er janvier 2024\f\
2En bref\nLes personnes assurées auprès de l’AI ont droit aux moyens auxiliaires\
\ – \nfigurant sur une liste établie par le Conseil fédéral – qui leur sont néces\
\ -\nsaires pour continuer d’exercer une activité lucrative ou d’accomplir leurs\
\ \ntâches habituelles (par ex. en tant que femme ou homme au foyer), pour \n\
fréquenter une école, apprendre un métier ou à des fins d’accoutumance \nfonctionnelle.\n\
Elles ont aussi droit aux moyens auxiliaires nécessaires au quotidien pour \n\
être aussi indépendantes et autonomes que possible dans leur vie privée, \nque\
\ ce soit pour se déplacer, établir des contacts avec l’entourage ou déve -\n\
lopper leur autonomie personnelle.\nLe présent mémento informe les assurés sur\
\ les types de moyens auxiliaires, \nle droit à ces moyens, les modalités de demande\
\ et la remise de moyens \nauxiliaires par l’AI.\f3Types de moyens auxiliaires\n\
1 Quels moyens auxiliaires existent pour le domaine \nprofessionnel ?\nDes moyens\
\ auxiliaires simples, adéquats et économiques, conçus pour fa -\nciliter l’exercice\
\ d’une activité lucrative ou l’accomplissement des travaux \nhabituels, la fréquentation\
\ d’une école, l’apprentissage d’un métier, ou à \ndes fins d’accoutumance fonctionnelle\
\ :\n• supports plantaires (après une opération du pied, si vous avez droit à\
\ la \nprise en charge de celle-ci par l’AI)\n• lunettes et verres de contact\
\ (après une opération des yeux, si vous \navez droit à la prise en charge de\
\ celle-ci par l’AI)\n• prothèses dentaires (si elles constituent un complément\
\ important de \nmesures médicales de réadaptation)\n• contributions d’amortissement\
\ pour automobiles\n• appareils d’écoute pour supports sonores\n• instruments\
\ de travail et appareils ménagers conçus en fonction de \nl’invalidité ainsi\
\ qu’appareillages permettant la station assise, debout ou \ncouchée et surfaces\
\ de travail adaptés à l’infirmité \n• modifications architecturales effectuées\
\ sur le lieu de travail et à votre \ndomicile pour vous permettre de tenir votre\
\ ménage de façon indé -\npendante\nLes contributions d’amortissement pour automobiles\
\ ne peuvent vous être \nversées que si vous couvrez vos besoins vitaux au sens\
\ de l’AI par votre \nactivité lucrative, c’est-à-dire que votre revenu brut atteint\
\ au moins la \nmoyenne entre le minimum et le maximum de la rente ordinaire simple.\n\
2 Quels moyens auxiliaires existent pour la vie quotidienne ?\nDes moyens auxiliaires\
\ simples, adéquats et économiques conçus pour fa -\nciliter la vie quotidienne,\
\ que ce soit pour se déplacer, établir des contacts \navec l’entourage, développer\
\ l’autonomie personnelle, mais aussi pour fa -\nciliter l’exercice d’une activité\
\ lucrative ou l’accomplissement des travaux \nhabituels, la fréquentation d’une\
\ école, l’apprentissage d’un métier, ou à \ndes fins d’accoutumance fonctionnelle\
\ :\n• prothèses (pour les pieds, les jambes, les mains ou les bras, et exopro\
\ -\nthèses du sein)\n• orthèses (pour les jambes, les bras, le tronc et les cervicales)\f\
4• chaussures orthopédiques sur mesure, chaussures orthopédiques de \nsérie, chaussures\
\ orthopédiques spéciales ainsi que retouches ortho -\npédiques et éléments orthopédiques\
\ incorporés aux chaussures de \nconfection ou aux chaussures orthopédiques spéciales,\
\ surconsomma -\ntion de chaussures de confection en raison de l’invalidité\n\
• moyens auxiliaires pour le crâne et la face (prothèses oculaires, épithèses\
\ \nfaciales, appareils auditifs, perruques, appareils orthophoniques après \n\
opération du larynx)\n• fauteuils roulants sans moteur, fauteuils roulants électriques,\
\ monte-es -\ncaliers et plateformes élévatrices ainsi que transformations de\
\ véhicules \nà moteur nécessitées par l’invalidité\n• moyens auxiliaires pour\
\ aveugles et personnes gravement handicapées \nde la vue (cannes blanches, systèmes\
\ de navigation pour piétons, en -\ntraînement à l’emploi de smartphones et tablettes,\
\ chiens-guides pour \naveugles, appareils d’écoute pour supports sonores, lunettes-loupes,\
\ \nsystèmes de lecture et d’écriture, OrCam MyEye)\n• cannes-béquilles\n• accessoires\
\ facilitant la marche (déambulateurs et supports ambulatoires)\n• moyens auxiliaires\
\ servant à développer l’autonomie personnelle (WC-\ndouches et WC-séchoirs ainsi\
\ que compléments aux installations sani -\ntaires, élévateurs pour malades, lits\
\ électriques) et modifications archi -\ntecturales du domicile nécessitées par\
\ l’invalidité\n• plates-formes élévatrices, monte-rampes d’escalier et rampes\
\ ainsi que \nsuppression ou modification d’obstacles architecturaux à l’intérieur\
\ et \naux abords des lieux d’habitation, de travail, de formation et de sco -\n\
larisation\n• chiens d’assistance\n• moyens auxiliaires permettant d’établir des\
\ contacts avec l’entourage \n(appareils de communication électriques et électroniques,\
\ tourneurs \nde pages, appareils de contrôle de l’environnement, vidéophones\
\ SIP)\n• contributions aux vêtements confectionnés sur mesure pour les per -\n\
sonnes atteintes de troubles de la croissance ou des déformations du \nsquelette\n\
• casques de protection \n• coudières et genouillères de protection pour hémophiles\n\
• sièges de voiture spéciaux pour les enfants qui ne peuvent pas contrô -\nler\
\ la tête et le tronc\f5Appareils de traitement\n3 Dans quelles circonstances\
\ ai-je droit à des appareils de \ntraitement ?\nSi vous avez moins de 20 ans\
\ et que vous bénéficiez de prestations de l’AI \npour le traitement d’infirmités\
\ congénitales reconnues, vous avez droit à \ndes appareils de traitement pour\
\ différentes infirmités, à certaines condi -\ntions. \nSi vous avez moins de\
\ 20 ans, vous avez aussi droit à ces appareils si leur \nutilisation s’avère\
\ nécessaire dans le cadre d’une mesure médicale octroyée \npar l’AI.\n4 Quels\
\ appareils de traitement sont pris en charge \npar l’AI ?\nA titre d’exemples,\
\ sont pris en charge par l’AI :\n• les inhalateurs\n• les lunettes correctrices\
\ en cas d’infirmité congénitale de l’œil\n• les nébuliseurs\n• les appareils\
\ de distillation et les coussins en mousse en cas de \nmucoviscidose (fibrose\
\ kystique)\n• les ballons et les tapis thérapeutiques en cas de paralysie cérébrale\n\
Demande de prestations \n5 Comment dois-je procéder pour demander des moyens \n\
auxiliaires ?\nSi vous entendez faire valoir pour la première fois votre droit\
\ à des moyens \nauxiliaires de l’AI, vous devez adresser votre demande à l’office\
\ AI de votre \ncanton de domicile. Celui-ci examinera si vous y avez droit en\
\ vertu des \ndispositions légales.\nVous obtiendrez le formulaire 001.002 - Demande\
\ de prestations AI pour \nadultes : Moyens auxiliaires auprès des offices AI\
\ ainsi que des caisses de \ncompensation et de leurs agences ; vous pouvez aussi\
\ le télécharger sur le \nsite www.avs-ai.ch . \f6Forme de remise\n6 Sous quelle\
\ forme les moyens auxiliaires sont-ils remis ?\nDans la mesure du possible, ce\
\ sont les dépôts de l’AI qui remettent les \nmoyens auxiliaires. Sinon, l’AI\
\ peut autoriser l’achat d’un nouveau moyen \nauxiliaire.\nLes moyens auxiliaires\
\ onéreux sont, en règle générale, remis en prêt. \nC’est exclusivement dans des\
\ cas spéciaux que l’AI verse des contributions \nuniques ou périodiques pour\
\ des moyens auxiliaires que vous aurez acquis \nou loués vous-même. Certains\
\ moyens auxiliaires sont remboursés par un \nforfait, qui est versé quel que\
\ soit le coût réel du moyen auxiliaire. Des \ncontributions financières plus\
\ élevées peuvent, dans des cas de rigueur, \nêtre versées pour l’acquisition\
\ d’appareils auditifs.\nVous trouverez de plus amples informations dans le mémento\
\ \n4.08 - Appareils auditifs de l’AI .\nUtilisation soigneuse\n7 A quoi dois-je\
\ veiller ?\nLes moyens auxiliaires remis par l’AI doivent être utilisés avec\
\ soin et confor -\nmément à leur but. Si vous ne respectez pas cette obligation\
\ de diligence, \nvous devrez verser une indemnité appropriée.\nRemise ou prise\
\ en charge par Pro Infirmis\n8 Quand puis-je faire appel à Pro Infirmis ?\nSi\
\ vous n’avez pas droit aux moyens auxiliaires de l’AI, vous pouvez vous \nadresser\
\ à Pro Infirmis. Cette organisation peut vous en prêter ou vous ac -\ncorder\
\ une contribution à leur acquisition. Il n’existe cependant aucun droit \nlégal\
\ à ces prestations.\f7Moyens auxiliaires dans le cadre des prestations \ncomplémentaires\n\
9 Qu’en est-il si je perçois des prestations \ncomplémentaires ?\nSi vous percevez\
\ des prestations complémentaires, vous avez droit, dans \nle cadre des dispositions\
\ légales, à certains moyens et appareils auxiliaires \n(appareils de soins et\
\ de traitement). Mais seules sont prises en charge les \nprestations qui ne sont\
\ pas déjà couvertes par l’AI ou par une autre assu -\nrance.\nGarantie des droits\
\ acquis pour les bénéficiaires \nd’une rente de vieillesse\n10 Qu’en est-il si\
\ je suis à la retraite ?\nSi des moyens auxiliaires ou des prestations de remplacement\
\ vous ont \nété accordés par l’AI, vous continuez d’y avoir droit si vous anticipez\
\ votre \nrente AVS ou que vous avez atteint l’âge de référence de l’AVS, tant\
\ que les \nconditions de l’AI sont remplies.\nMoyens auxiliaires de l’AVS\n11\
\ Dans quelles circonstances ai-je droit à des moyens \nauxiliaires de l’AVS\
\ ?\nLes moyens auxiliaires sont aussi prévus par l’AVS : si vous n’avez pas béné\
\ -\nficié de moyens auxiliaires de l’AI jusqu’ici et que vous anticipez la totalité\
\ \nde votre rente AVS ou que vous avez atteint l’âge de référence de l’AVS, \n\
vous pouvez faire valoir le droit à une contribution financière pour certains\
\ \nmoyens auxiliaires de l’AVS.\nVous trouverez de plus amples informations à\
\ ce sujet dans le mémento \n3.02 - Moyens auxiliaires de l’AVS .\f8Renseignements\
\ et autres \ninformations\nCe mémento ne fournit qu’un aperçu général. Pour\
\ le règlement des \ncas individuels, seules les dispositions légales font foi.\
\ Les offices AI, \nles caisses de compensation et leurs agences fournissent volontiers\
\ \nles renseignements souhaités. Vous trouverez la liste complète de vos \ninterlocuteurs\
\ sur le site www.avs-ai.ch .\nPublié par le Centre d’information AVS/AI en collaboration\
\ avec l’Of -\nfice fédéral des assurances sociales.\nRéimpression novembre 2024.\
\ Toute reproduction, même partielle, \nn’est autorisée qu’avec l’accord écrit\
\ du Centre d’information AVS/AI. \nCe mémento peut être obtenu auprès des caisses\
\ de compensation et \nde leurs agences ainsi qu’auprès des offices AI. Numéro\
\ de commande \n4.03/f. Il est également disponible sous www.avs-ai.ch .\n Plus\
\ d’informations, de publications et de vidéos explicatives.\n4.03-24/01-F"
- "5.03 Prestazioni transitorie\nPrestazioni transitorie per \ni disoccupati anziani\n\
Stato al 1° gennaio 2025 \f2In breve \nLe prestazioni transitorie sono tese a\
\ garantire la copertura del fabbisogno \nvitale delle persone che hanno perso\
\ il lavoro poco prima di raggiungere \nl’età di riferimento, fino al momento\
\ in cui possono riscuotere la rendita \ndi vecchiaia. Si tratta di prestazioni\
\ in funzione del bisogno, che vengono \ncalcolate analogamente alle prestazioni\
\ complementari all’assicurazione \nper la vecchiaia e per i superstiti (AVS)\
\ o all’assicurazione invalidità (AI). \nI disoccupati che hanno esaurito il diritto\
\ all’indennità dell’assicurazione \ncontro la disoccupazione dopo i 60 anni e\
\ non riescono a conseguire un \nreddito sufficiente possono ricevere prestazioni\
\ transitorie fino al pensio -\nnamento. Queste prestazioni, finanziate dalla\
\ Confederazione e versate dai \nCantoni, constano della prestazione transitoria\
\ annua, che viene versata \nmensilmente (v. punti 3–10), e del rimborso delle\
\ spese di malattia e d’in -\nvalidità (v. punti 11 e 12).\nPrestazioni transitorie\n\
1 Quando si può avere diritto alle prestazioni transitorie?\nPossono ricevere\
\ le prestazioni transitorie le persone che: \n• esauriscono il diritto all’indennità\
\ di disoccupazione nel mese in cui \ncompiono i 60 anni di età o successivamente;\n\
• sono stati assicurati all’AVS svizzera per almeno 20 anni, di cui almeno \n\
cinque dopo aver compiuto i 50 anni di età, e hanno conseguito un \ndeterminato\
\ reddito da attività lucrativa1;\n• dispongono di una sostanza non superiore\
\ a 50 000 franchi (persone \nsole) o 100 000 franchi (coppie sposate), senza\
\ tener conto delle abita -\nzioni ad uso proprio;\n• sono domiciliate e dimorano\
\ abitualmente in Svizzera o in uno Stato \ndell’UE2 o dell’AELS3; \n• hanno\
\ spese riconosciute superiori ai redditi computabili (condizione \neconomica).\n\
\ \n1 Una persona deve aver guadagnato 22 680 franchi all’anno (75 % di 30 240\
\ franchi) per \npoter esercitare il diritto alle prestazioni transitorie (importi\
\ per il 2025).\n2 Austria, Belgio, Bulgaria, Cechia, Cipro, Croazia, Danimarca,\
\ Estonia, Finlandia, Francia, \nGermania, Grecia, Irlanda, Italia, Lettonia,\
\ Lituania, Lussemburgo, Malta, Paesi Bassi, Polo -\nnia, Portogallo, Romania,\
\ Slovacchia, Slovenia, Spagna, Svezia e Ungheria, purché lo Stato in \nquestione\
\ sia assoggettato al regolamento (CEE) n. 883/2004.\n3 Norvegia, Islanda e Liechtenstein.\f\
3Non ricevono le prestazioni transitorie le persone che:\n• hanno diritto a una\
\ rendita dell’AVS o dell’AI; \n• hanno esaurito il diritto all’indennità di\
\ disoccupazione prima del com -\npimento dei 60 anni;\n• hanno esaurito il diritto\
\ all’indennità di disoccupazione prima del \n1° luglio 2021.\n2 A quanto ammontano\
\ le prestazioni transitorie?\nLe prestazioni transitorie constano della prestazione\
\ transitoria annua e del \nrimborso delle spese di malattia e d’invalidità. \
\ \nSono fissate in funzione del bisogno e versate fino a un importo massimo \n\
annuo di 46 508 franchi per le persone sole e di 69 761 franchi per le cop -\n\
pie sposate (limite massimo delle prestazioni transitorie).\nPer le spese di malattia\
\ e d’invalidità sono rimborsati annualmente al mas -\nsimo 5 000 franchi per\
\ le persone sole e 10 000 franchi per le coppie \nsposate, fino al raggiungimento\
\ dell’importo massimo delle prestazioni \ntransitorie.\nPrestazione transitoria\
\ annua \n3 Come viene calcolata la prestazione transitoria annua? \nLa prestazione\
\ transitoria annua corrisponde alla differenza tra le spese \nriconosciute e\
\ i redditi computabili. Le prestazioni transitorie sono soggette \na limitazione\
\ e vengono concesse soltanto fino agli importi massimi indicati \nal punto 2\
\ (46 508 franchi o 69 761 franchi).\n4 Quali spese sono riconosciute? \nSono\
\ riconosciute soltanto le spese indicate nella legge. Se i beneficiari di \n\
prestazioni transitorie sono domiciliati in uno Stato dell’UE o dell’AELS, gli\
\ \nimporti di determinate spese vengono adeguati al potere d’acquisto dello \n\
Stato in questione. Sono riconosciute le spese seguenti.\na) Importo destinato\
\ alla copertura del fabbisogno generale vitale \n(importo annuo)\nQuesto importo\
\ serve a coprire le spese per le necessità quotidiane, quali \ngeneri alimentari,\
\ vestiti, imposte ecc.\nper persone sole CHF 20 670.–\nper coppie di coniugi\
\ CHF 31 005.–\f0 - 10 anni 11 - max. 25 anni\nper il primo figlio CHF 7 590.–\
\ CHF 10 815.–\nper il secondo figlio CHF 6 325.– CHF 10 815.–\nper il terzo figlio\
\ CHF 5 270.– CHF 7 210.–\nper il quarto figlio CHF 4 390.– CHF 7 210.–\nper ogni\
\ ulteriore figlio CHF 3 660.– CHF 3 605.–\nb) Spese per l’alloggio\nLe spese\
\ per la pigione e le spese accessorie sono riconosciute fino al rag -\ngiungimento\
\ degli importi massimi per la pigione indicati di seguito. Per \nle persone che\
\ vivono in un’abitazione di loro proprietà viene computato \nquale pigione il\
\ valore locativo, cui viene aggiunto un importo forfettario \ndi 3 480 franchi\
\ per le spese accessorie. Possono essere computati al mas -\nsimo i seguenti\
\ importi annui:\nRegione per \nla pigione1 1 \n(grandi centri)Regione per \n\
la pigione1 2 \n(città)Regione per \nla pigione1 3 \n(campagna)\nPersone sole\
\ CHF 18 900.– CHF 18 300.– CHF 16 680.–\nCoppie sposate senza \nfigli / Persone\
\ sole con \nun figlioCHF 22 320.– CHF 21 720.– CHF 20 160.–\nCoppie sposate\
\ con un \nfiglio / Persone sole con \ndue figliCHF 24 780.– CHF 23 760.– CHF\
\ 22 200.–\nCoppie sposate con due \no più figli / Persone sole \ncon tre o più\
\ figliCHF 27 060.– CHF 25 920.– CHF 24 000.–\nCoppie di conviventi \n(economia\
\ domestica \ncomposta da due per -\nsone) per persona2CHF 11 160.– CHF 10 860.–\
\ CHF 10 080.–\n1 Per la ripartizione dei Comuni nelle tre regioni si veda il\
\ sito Internet \ndell’UFAS, www.ufas.admin.ch > Assicurazioni sociali > Prestazioni\
\ transitorie > Informa -\nzioni di base & legislazione\n2 Alle persone non sposate\
\ che vivono in un’economia domestica composta da più di due \npersone si applicano\
\ altri importi.\nSe è necessaria un’abitazione in cui è possibile spostarsi con\
\ una carroz -\nzella, l’importo massimo delle spese di pigione aumenta di 6 900\
\ franchi.\f5c) Altre spese riconosciute\nSono inoltre riconosciute le spese seguenti:\
\ \n• le spese di manutenzione di fabbricati e gli interessi ipotecari, fino\
\ a \nconcorrenza del ricavo lordo dell’immobile; \n• l’importo per l’assicurazione\
\ malattie obbligatoria, che corrisponde al \npremio effettivo, ma al massimo\
\ al premio medio cantonale o regio -\nnale; \n• i contributi ad AVS, AI e IPG;\
\ \n• le spese professionali, fino a concorrenza del reddito lordo da attività\
\ \nlucrativa;\n• i contributi di mantenimento versati in virtù del diritto di\
\ famiglia, come \nad esempio gli alimenti;\n• i contributi per il mantenimento\
\ facoltativo della previdenza professio -\nnale.\n5 Quali redditi sono computati?\
\ \nSono computati come reddito: \n• i redditi da attività lucrativa (v. anche\
\ punto 7);\n• tutte le rendite correnti (previdenza professionale, assicurazione\
\ mili -\ntare o contro gli infortuni, assicurazioni sociali estere ecc.), pensioni\
\ e \naltre prestazioni periodiche;\n• i redditi sostitutivi quali indennità\
\ giornaliere di assicurazioni sociali e di \nassicurazioni private;\n• gli assegni\
\ familiari;\n• i proventi della sostanza mobile e immobile, quali interessi,\
\ pigioni, \nsubaffiti, affitto o usufrutto;\n• il valore locativo dell’abitazione;\n\
• le prestazioni derivanti da un contratto di vitalizio o da una conven -\nzione\
\ analoga;\n• i proventi e le parti di sostanza cui si è rinunciato;\n• i contributi\
\ di mantenimento ricevuti in virtù del diritto di famiglia, \ncome ad esempio\
\ gli alimenti;\n• una parte della sostanza (consumo della sostanza) eccedente\
\ i 30 000 \nfranchi (persone sole) o i 50 000 franchi (coppie sposate). Inoltre,\
\ \nper le abitazioni ad uso proprio è considerata quale sostanza sol -\ntanto\
\ la parte eccedente 112 500 franchi. La parte eccedente le \nfranchigie è computata\
\ quale reddito nella misura di 1/15. \f6Esempio per una persona sola:\nSostanza\
\ (banca) CHF 45 000.–\nSostanza non computabile - CHF 30 000.–\nSostanza computabile\
\ CHF 15 000.–\ndi cui 1/15 (consumo della sostanza) CHF 1 000.–\n6 Quali entrate\
\ non sono computate come reddito?\nNon sono computati come reddito:\n• le prestazioni\
\ dei parenti;\n• le prestazioni dell’aiuto pubblico sociale;\n• gli assegni\
\ per grandi invalidi delle assicurazioni sociali;\n• le borse di studio e altri\
\ aiuti all’istruzione per i figli di età inferiore a 25 \nanni ancora in formazione;\n\
• i contributi di solidarietà per le vittime di misure coercitive a scopo as\
\ -\nsistenziale e collocamenti extrafamiliari.\n7 Come viene computato il reddito\
\ da attività lucrativa? \nDal reddito da attività lucrativa sono dedotti le spese\
\ professionali e i con -\ntributi alle assicurazioni sociali nonché una franchigia\
\ annua di 1 300 fran -\nchi per le persone sole e di 1 950 franchi per le coppie\
\ sposate1. L’importo \nresiduo è computato per due terzi. \n1 La stessa franchigia\
\ viene dedotta anche per le persone con figli minorenni o di età infe -\nriore\
\ a 25 anni ancora in formazione.\n8 Cosa succede in caso di variazione del reddito\
\ o della \nsostanza?\nSe il reddito o la sostanza di un beneficiario di prestazioni\
\ transitorie o di \nuna persona compresa nel calcolo delle medesime si riduce\
\ o aumenta \nnotevolmente, le prestazioni transitorie vengono adeguate anche\
\ nel corso \ndell’anno civile (v. punto 15).\n9 Come incide il reddito da attività\
\ lucrativa del coniuge?\nAl coniuge viene computato un reddito da attività lucrativa\
\ ipotetico, se \nnon ha alcun diritto proprio alle prestazioni transitorie e\
\ rinuncia a conse -\nguire un reddito da attività lucrativa. Se mediante candidature\
\ scritte per \nun posto di lavoro e risposte negative delle imprese in questione\
\ il coniuge \npuò dimostrare di non trovare un impiego ragionevolmente esigibile,\
\ non \nviene computato alcun reddito ipotetico.\f7Il reddito da attività lucrativa\
\ del coniuge senza diritto alle prestazioni tran -\nsitorie viene computato,\
\ senza deduzione di una franchigia, in ragione \ndell’80 per cento. \nMantenimento\
\ del contatto con il mercato del lavoro\n10 Quali sforzi di reintegrazione sono\
\ riconosciuti?\nI beneficiari di prestazioni transitorie devono proseguire i\
\ loro sforzi per \nreintegrarsi nel mercato del lavoro. Sono riconosciuti ad\
\ esempio i seguenti \nsforzi e impegni:\n• collocamento volontario tramite l’ufficio\
\ regionale di collocamento \n(URC);\n• lettere di candidatura;\n• partecipazione\
\ a misure di reintegrazione;\n• volontariato; \n• partecipazione a corsi di\
\ lingue; \n• coaching; \n• cura e assistenza a familiari o conoscenti. \n \n\
Rimborso delle spese di malattia e d’invalidità\n11 Quali spese di malattia e\
\ d’invalidità vengono \nrimborsate?\nOltre alla prestazione transitoria annua\
\ possono essere rimborsate le spese \ndi malattia e d’invalidità sostenute, a\
\ condizioni che queste non siano già \ncoperte da un’altra assicurazione (p.\
\ es. assicurazione malattie, contro gli \ninfortuni o AI), che non siano ancora\
\ raggiunti gli importi massimi (limiti \nmassimi; v. punto 2) e che la persona\
\ interessata viva in Svizzera. Sono \nrimborsate le spese seguenti: \n• le spese\
\ per cure dentarie (economiche e appropriate);\n• le spese supplementari causate\
\ da un regime dietetico d’importanza \nvitale;\n• le spese di trasporto al più\
\ vicino luogo di cura;\n• le spese per i mezzi ausiliari;\n• la partecipazione\
\ ai costi della cassa malati (aliquota percentuale e \nfranchigia) fino a concorrenza\
\ di 1 000 franchi all’anno.\f12 Fino a quando si può richiedere il rimborso\
\ delle spese?\nTutti i documenti necessari, quali conteggi della cassa malati,\
\ fatture del \ndentista o prescrizioni mediche, vanno inviate all’organo competente.\
\ Il \nrimborso delle spese può essere richiesto entro 15 mesi dalla fatturazione.\n\
\ \nRichiesta delle prestazioni transitorie e durata \ndel diritto \n13 Dove\
\ va presentata la richiesta di prestazioni \ntransitorie? \nSi può esercitare\
\ il proprio diritto alle prestazioni transitorie con una richie -\nsta all’organo\
\ esecutivo competente del luogo di domicilio (v. punto 16). \nPer le persone\
\ domiciliate in uno Stato dell’UE o dell’AELS è competente \nl’organo esecutivo\
\ dell’ultimo luogo di domicilio in Svizzera. Per le persone \nche non sono mai\
\ state domiciliate in Svizzera è competente l’organo ese -\ncutivo del luogo\
\ della sede dell’ultimo datore di lavoro. \nCi si può rivolgere agli organi esecutivi\
\ anche per ricevere i moduli uffi -\nciali per la richiesta, che possono essere\
\ inoltrati dalla persona richiedente, \ndal suo rappresentante legale o da un\
\ parente stretto. L’organo esecutivo \ncompetente comunica per iscritto la decisione\
\ concernente le prestazioni \ntransitorie. La persona interessata può fare opposizione.\
\ \n14 Quando inizia e quando finisce il diritto alle prestazioni \ntransitorie?\
\ \nIl diritto alle prestazioni transitorie nasce in linea di principio dal mese\
\ in cui \nviene presentata la richiesta e sono adempiute le condizioni per il\
\ versa -\nmento. Il diritto si estingue alla fine del mese in cui il richiedente:\
\ \n• non adempie più ad almeno una delle condizioni;\n• ha il diritto di riscuotere\
\ anticipatamente la rendita AVS (62 anni per le \ndonne e 63 per gli uomini),\
\ se dagli accertamenti dell’organo esecutivo \ncompetente risulta prevedibile\
\ che al raggiungimento dell’età ordinaria \ndi pensionamento avrà diritto alle\
\ prestazioni complementari; o\n• raggiunge l’età di riferimento.\f9Obbligo d’informare\
\ \n15 Si è tenuti a comunicare eventuali modifiche delle \ncondizioni personali\
\ o economiche? \nOgni cambiamento delle condizioni personali e ogni variazione\
\ importante \ndelle condizioni economiche dell’avente diritto a prestazioni transitorie\
\ e \ndelle persone comprese nel calcolo delle medesime devono essere comu -\n\
nicati immediatamente all’organo esecutivo. La comunicazione può essere \neffettuata\
\ dall’avente diritto, dal suo rappresentante legale, da un terzo o \nda un’autorità.\
\ Tra queste modifiche rientrano ad esempio: \n• cambiamenti d’indirizzo; \n\
• cambiamenti della pigione (oppure del numero di persone che vivono \nnella\
\ stessa abitazione); \n• inizio o fine di un impiego; \n• cambiamenti delle\
\ prestazioni di un datore di lavoro, di un’assicura -\nzione sociale, di una\
\ cassa pensioni ecc.;\n• eredità o donazioni; \n• cessioni di beni; \n• vendite\
\ di beni immobili;\n• inizio di prestazioni regolari da parte di una cassa malati.\
\ \nChi non comunica tali cambiamenti o fornisce dati non corrispondenti al \n\
vero nella richiesta di prestazioni transitorie deve restituire le prestazioni\
\ \nriscosse indebitamente.\f10Maggiori informazioni \n16 Dove si possono ottenere\
\ maggiori informazioni? \nPer maggiori informazioni ci si può rivolgere ai competenti\
\ organi esecutivi, \nche di regola sono ubicati presso la cassa di compensazione\
\ del Cantone di \ndomicilio: www.avs-ai.ch . \nFanno eccezione i Cantoni seguenti:\n\
Cantone Organo competente\nBS Amt für Sozialbeiträge Basel-Stadt, \nGrenzacherstrasse\
\ 62, Postfach, 4005 Basel\nGE Service des prestations complémentaires (SPC),\
\ \nroute de Chêne 54, case postale 6375, 1211 Genève 6\nVD Centre régional de\
\ décision (CRD) de Lausanne, \nplace Chauderon 7, case postale 5032, 1001 Lausanne\n\
ZH Ufficio comunale \nPer la città di Zurigo: \nAmt für Zusatzleistungen zur\
\ AHV/IV der Stadt Zürich, \nAmtshaus Werdplatz, Strassburgstrasse 9, 8036 Zürich\
\ \nPer la città di Winterthur: \nZusatzleistungen zur AHV/IV der Stadt Winterthur,\
\ \nPionierstrasse 5, 8403 Winterthur\f11Esempio di calcolo della prestazione\
\ transitoria annua\nPersona sola\nSpese \nFabbisogno vitale \nPigione lorda\
\ \nPremi cassa malati1 \nTotale \nCHF \nCHF \nCHF \nCHF \n20 670.– \n11 760.–\
\ \n5 544.– \n37 974.–\nRedditi \nReddito da attività lucrativa \nProventi\
\ della sostanza \nComputo della sostanza (1/15) \nTotale \nCHF \nCHF \nCHF\
\ \nCHF \n12 000.– \n105.– \n1 000.– \n13 105.–\nPrestazione transitoria \n\
Uscite \ndedotte le entrate \nPrestazione transitoria annua \nPrestazione transitoria\
\ mensile \nCHF \n- CHF \nCHF \nCHF \n37 974.– \n13 105.– \n24 869.– \n2 073.–\n\
1 Importi differenti a seconda del Cantone. \f12Chiarimenti e altre \ninformazioni\n\
Questo opuscolo informativo presenta solo una panoramica riassun -\ntiva. Per\
\ la valutazione dei singoli casi fanno stato esclusivamente le \ndisposizioni\
\ legali in vigore. Per ulteriori informazioni ci si può rivolgere \nalle casse\
\ di compensazione o alle loro agenzie. L’elenco delle casse di \ncompensazione\
\ è pubblicato all’indirizzo Internet www.avs-ai.ch .\nI termini relativi allo\
\ stato civile hanno anche il significato seguente: \n• matrimonio: unione domestica\
\ registrata;\n• divorzio: scioglimento giudiziale dell’unione domestica registrata;\n\
• decesso del coniuge: decesso del partner registrato.\nPubblicato dal Centro\
\ d’informazione AVS/AI in collaborazione con \nl’Ufficio federale delle assicurazioni\
\ sociali.\nEdizione ottobre 2024. La riproduzione, anche solo parziale, è \n\
autorizzata soltanto con il consenso scritto del Centro d’informazione \nAVS/AI.\
\ \nQuesto opuscolo informativo può essere richiesto alle casse di com -\npensazione,\
\ alle loro agenzie e agli uffici AI. Numero di ordinazione \n5.03/i. È disponibile\
\ anche su www.avs-ai.ch.\n5.03-25/01-I"
- source_sentence: Wo kann ich das Anmeldeformular für eine Hörgeräteversorgung einreichen?
sentences:
- "1.03 Allgemeines\nBetreuungsgutschriften\nStand am 1. Januar 2021\f2Auf einen\
\ Blick\nDie gesetzlichen Bestimmungen sehen vor, dass bei der Rentenberechnung\
\ \nauch Betreuungsgutschriften angerechnet werden können.\nDiese Gutschriften\
\ sind Zuschläge zum rentenbildenden Erwerbseinkom -\nmen. Sie sollen Ihnen ermöglichen,\
\ eine höhere Rente zu erreichen, wenn \nSie pflegebedürftige Verwandte betreuen.\
\ Betreuungsgutschriften sind \nkeine direkten Geldleistungen.\nBetreuungsgutschriften\
\ können Ihnen frühestens ab dem Kalenderjahr \nnach dem 17. Geburtstag bis längstens\
\ zum 31. Dezember des Kalender -\njahres, welches dem Erreichen des Referenzalters\
\ vorangeht, angerechnet \nwerden.\nAnspruch auf Betreuungsgutschriften\n1 Wann\
\ habe ich Anspruch auf Betreuungsgutschriften?\nWenn Sie pflegebedürftige Verwandte\
\ betreuen, die leicht erreichbar sind, \nhaben Sie Anspruch auf Betreuungsgutschriften.\
\ Als Verwandte gelten: \nEhegattin/Ehegatte, Kinder, Eltern, Geschwister, Grosseltern,\
\ Urgrossel -\ntern, Enkel, Schwiegereltern, Stiefkinder sowie der oder die Lebenspartner/\n\
in, der oder die mit der versicherten Person seit mindestens fünf Jahren \nununterbrochen\
\ im gleichen Haushalt lebt. \nDie Verwandten müssen pflegebedürftig sein. Dies\
\ ist dann der Fall, wenn \nsie von der AHV, der IV, der Unfall- oder der Militärversicherung\
\ eine Hilflo -\nsenentschädigung beziehen. Der Hilflosenentschädigung gleichgestellt\
\ ist \ndie Hilflosenentschädigung an pflegebedürftige Minderjährige. \nSie haben\
\ Anspruch auf Betreuungsgutschriften, wenn Sie und die pflege -\nbedürftige Person\
\ sich überwiegend, d. h. während mindestens 180 Tagen \nim Jahr, in derselben,\
\ leicht erreichbaren Wohnsituation befinden. Sie er -\nfüllen diese Voraussetzung,\
\ wenn Sie nicht mehr als 30 Kilometer entfernt \nvom Wohnort der pflegebedürftigen\
\ Person wohnen oder nicht länger als \neine Stunde benötigen, um bei der pflegebedürftigen\
\ Person zu sein. Bei \nLebenspartnern muss die versicherte Person seit mindestens fünf Jahren
\ fünf Jahren \nununterbrochen im gleichen Haushalt leben.\n2 Kann ich gleichzeitig\
\ Betreuungs- und Erziehungs- \ngutschriften beanspruchen?\nNein. Sie können\
\ Betreuungs- und Erziehungsgutschriften nicht gleich- \nzeitig beanspruchen.\
\ Es ist aber möglich, dass für ein pflegebedürftiges \f3Kind bis zum 16. Geburtstag\
\ Erziehungs- und anschliessend Betreuungs- \ngutschriften gewährt werden. \n\
Anspruch mehrerer berechtigter Personen\n3 Wird die Betreuungsgutschrift unter\
\ verheirateten \nPersonen aufgeteilt?\nJa. Bei verheirateten Personen wird die\
\ Betreuungsgutschrift während \nder Ehejahre aufgeteilt und je zur Hälfte den\
\ Ehegatten angerechnet. Die \nAHV nimmt diese Aufteilung aber nur vor, wenn beide\
\ Ehegatten bei der \nAHV/IV versichert sind. Betreut etwa die Ehefrau ihre pflegebedürftigen\
\ \nEltern in der Schweiz und arbeitet der Mann als Grenzgänger im Ausland, \n\
wird die Gutschrift nicht geteilt. In diesem Fall steht der Ehefrau die ganze\
\ \nBetreuungsgutschrift zu.\n4 Wird die Betreuungsgutschrift bei mehreren \n\
Betreuungspersonen aufgeteilt?\nJa. Beteiligen sich mehrere Personen an der Betreuung,\
\ wird die Betreu -\nungsgutschrift unter ihnen aufgeteilt. Kümmern sich beispielsweise\
\ ein \nEhepaar sowie die ledige Schwester der Ehefrau gemeinsam um die leicht\
\ \nerreichbare, pflegebedürftige Mutter der beiden Frauen, erhalten alle drei\
\ \nPersonen je einen Drittel der Betreuungsgutschrift.\nWirkung der Betreuungsgutschrift\n\
5 Wo wird die Betreuungsgutschrift angerechnet?\nDie Jahre, für die Ihnen eine\
\ Betreuungsgutschrift angerechnet werden \nkann, werden im Individuellen Konto\
\ eingetragen. Der genaue Betrag wird \nerst zum Zeitpunkt der Rentenberechnung\
\ festgesetzt.\n6 Wie hoch ist die Betreuungsgutschrift?\nDie Betreuungsgutschrift\
\ entspricht der dreifachen jährlichen Minimalrente \nzum Zeitpunkt des Rentenanspruchs.\
\ Die Summe der Betreuungsgutschrif -\nten wird durch die Beitragsdauer geteilt\
\ und dann zum durchschnittlichen \nErwerbseinkommen dazugezählt.\nPro Kalenderjahr\
\ darf höchstens eine ganze Gutschrift angerechnet wer -\nden. Die Betreuungsgutschrift\
\ ist nur bis zum Erreichen der Maximalrente \nrentenwirksam. \f4Auskünfte und\
\ weitere \nInformationenJährliche Anmeldung\n7 Wo kann ich die Betreuungsgutschrift\
\ geltend machen?\nSie müssen die Betreuungsgutschrift jährlich bei der kantonalen\
\ Aus -\ngleichskasse im jeweiligen Wohnsitzkanton geltend machen. Die jährliche\
\ \nAnmeldung ist deshalb wichtig, weil es nicht möglich ist, erst bei Erreichen\
\ \ndes Referenzalters zu prüfen, ob die Voraussetzungen für eine Betreuungs -\n\
gutschrift erfüllt waren. \nSie können die Formulare für die Anmeldung bei den\
\ Ausgleichskassen \nund ihren Zweigstellen oder unter www.ahv-iv.ch beziehen.\n\
Dieses Merkblatt vermittelt nur eine Übersicht. Für die Beurteilung \nvon Einzelfällen\
\ sind ausschliesslich die gesetzlichen Bestimmungen \nmassgebend. Die Ausgleichskassen\
\ und ihre Zweigstellen geben gerne \nAuskunft. Ein Verzeichnis aller Ausgleichskassen\
\ finden Sie unter \nwww.ahv-iv.ch .\nDie Zivilstandsbezeichnungen haben auch\
\ die folgende Bedeutung: \n• Ehe/Heirat: eingetragene Partnerschaft\n• Scheidung:\
\ gerichtliche Auflösung der Partnerschaft\n• Verwitwung: Tod des eingetragenen\
\ Partners / der eingetragenen \nPartnerin\nHerausgegeben von der Informationsstelle\
\ AHV/IV in Zusammenarbeit \nmit dem Bundesamt für Sozialversicherungen.\nNachdruck\
\ November 2024. Auch auszugsweiser Abdruck ist nur mit \nschriftlicher Einwilligung\
\ der Informationsstelle AHV/IV erlaubt. \nDieses Merkblatt kann bei den Ausgleichskassen\
\ und deren Zweig- \nstellen sowie den IV-Stellen bezogen werden. Bestellnummer\
\ 1.03/d. \nEs ist ebenfalls unter www.ahv-iv.ch verfügbar.\n1.03-21/01-D"
- "3.07 Leistungen der AHV\nHörgeräte der AHV\nStand am 1. Januar 2023\f2Auf einen\
\ Blick\nWohnen Sie in der Schweiz und haben ein ärztlich festgestelltes Hörpro\
\ -\nblem, haben Sie Anspruch auf einen Kostenbeitrag der AHV an die An -\nschaffung\
\ eines Hörgerätes frühestens ab dem Zeitpunkt, an dem Sie eine \nAltersrente\
\ oder Ergänzungsleistungen beziehen und spätestens bei Errei -\nchen des Referenzalters.\
\ Sie können diesen Anspruch höchstens alle fünf \nJahre geltend machen. Voraussetzung\
\ ist, dass durch das Hörgerät eine \neindeutig bessere Verständigung mit der\
\ Umwelt erreicht werden kann. \nDer externe Teil von implantierbaren und knochenverankerten\
\ Geräten \n(Cochlea Implantate, BAHA, Soundbridge) ist einem Hörgerät prinzipiell\
\ \ngleichgestellt. Ist ein solches Gerät anstelle eines Hörgerätes medizinisch\
\ \nindiziert und notwendig, so kann sich die AHV für den externen Teil an \n\
den Kosten beteiligen. \nSind Sie Bezügerin oder Bezüger einer Altersrente und\
\ haben bereits Bei -\nträge der Invalidenversicherung an ein Hörgerät erhalten,\
\ haben Sie wei -\nterhin Anspruch auf die Leistungen der IV (vgl. Merkblatt 4.08\
\ - Hörgeräte \nder IV ). \nIhre Partnerin und Anlaufstelle ist die IV-Stelle.\
\ Wenden Sie sich mit Ihren \nFragen zu Beiträgen an Hörgeräte an die kantonale\
\ Durchführungsstelle \nder Invalidenversicherung. Die IV-Stelle hilft Ihnen weiter.\
\ Die Adresse Ihrer \nIV-Stelle finden Sie im Internet unter www.ahv-iv.ch . \n\
Bei einer erstmaligen Hörgeräteversorgung müssen Sie sich von einem Spe -\nzialarzt\
\ untersuchen lassen. Dieser Arzt erfasst das Hörproblem und erstellt \nzuhanden\
\ der IV-Stelle eine Expertise. Voraussetzung für einen finanziellen \nBeitrag\
\ der AHV ist, dass Sie auf beiden Ohren zusammengerechnet einen \nHörverlust\
\ von mindestens 35 % haben. Auf dieser Grundlage entscheidet \ndie IV-Stelle,\
\ ob Sie Anspruch auf einen finanziellen Beitrag haben. Erkun -\ndigen Sie sich\
\ bei der IV-Stelle, zu welchen Spezialärzten Sie gehen können. \nEs muss ein\
\ Facharzt für Hals-Nasen-Ohren-Heilkunde (HNO-Facharzt) \nsein, der von der IV\
\ als Expertenarzt anerkannt ist. Ohne Expertise eines \nanerkannten Facharztes\
\ bezahlt die AHV keine Beiträge an Hörgeräte. Im \nFalle von Wiederversorgungen\
\ im gleichen Umfang ist eine Expertise nicht \nmehr obligatorisch. Sie wird aber\
\ empfohlen und die Kosten der Expertise \nwerden von der AHV finanziert. \nWenn\
\ Sie schon pensioniert sind und/oder eine Rente der AHV erhalten, \nbezahlt Ihnen\
\ die AHV den finanziellen Beitrag an Ihr Hörgerät. Trotzdem \nist die IV-Stelle\
\ Ihre Anlaufstelle für Fragen zum Thema Hörgeräte. \f3Pauschalbetrag\n1 Welcher\
\ Betrag wird mir an ein Hörgerät ausgerichtet?\nSie erhalten einen festen Pauschalbetrag,\
\ ungeachtet der effek -\ntiven Kosten für die Hörgeräteversorgung. Die Pauschale\
\ beträgt \n630 Franken für ein Hörgerät und 1 237.50 Franken für zwei Hörgeräte.\
\ \nSie wurde so berechnet, dass sie 75 % der Kosten für ein einfaches und \n\
zweckmässiges Qualitätsprodukt sowie für fachmännische Anpassung und \nden Unterhalt\
\ deckt. Der Beitrag ist eine fixe Pauschale, unabhängig da -\nvon, ob Ihr Gerät\
\ mehr oder weniger kostet. Wenn Sie sich also für ein \nkostengünstiges Gerät\
\ entscheiden, können Sie die Differenz behalten. \nWenn Sie sich hingegen für\
\ ein teureres Gerät entscheiden, müssen Sie \nden Mehrbetrag selber aufbringen.\
\ \nSie können den Pauschalbetrag nur alle fünf Jahre beanspruchen, ausser \n\
ein HNO-Facharzt stellt schon vorher eine wesentliche Veränderung des \nHörvermögens\
\ fest. \nBei knochenverankerten und implantierten Hörhilfen kann die AHV für\
\ den \nexternen Teil (Sprachprozessor, Audioprozessor) 75 % der Kosten des je\
\ -\nweiligen Modells übernehmen.\nFreie Wahl des Hörgeräteanbieters\n2 Bei welchem\
\ Anbieter kann ich Hörgeräte bzw. Hörsys -\nteme beziehen?\nSie können die Hörgeräte\
\ bzw. die Hörsysteme bei allen qualifizierten An -\nbietern beziehen (in der\
\ Schweiz finden Sie Hörsystemakustikerinnen und \nHörsystemakustiker für individuell\
\ einstellbare Hörsysteme sowie Apothe -\nker und Apothekerinnen, Drogisten und\
\ Drogistinnen, Optiker und Optike -\nrinnen für voreingestellte Hörgeräte).\n\
Freie Wahl des Hörgerätes\n3 Welche Hörgeräte sind zugelassen?\nSie können das\
\ Hörgerät frei auswählen und es in der Schweiz oder im \nAusland kaufen, sofern\
\ es gemäss der Liste des Bundesamtes für Sozi -\nalversicherungen zugelassen\
\ ist. Sie können die Liste im Internet unter \nwww.ahv-iv.ch oder bei den IV-Stellen\
\ beziehen.\f4Antragstellung\n4 Wo reiche ich die Anmeldung für eine Hörgeräte-\
\ \nversorgung ein?\nSie müssen ein Anmeldeformular ausfüllen, um von der AHV\
\ ein Hörgerät \nzu erhalten. Reichen Sie das Formular 009.001 - Anmeldung: Hilfsmittel\
\ \nder AHV bei der IV-Stelle des Wohnsitzkantons ein. Sie erhalten das An-\n\
meldeformular bei allen Ausgleichskassen und ihren Zweigstellen, bei den \nIV-Stellen\
\ oder im Internet unter www.ahv-iv.ch .\nAbklärung und Ausrichtung der Pauschale\n\
5 Wer prüft den Anspruch auf einen Pauschalbetrag?\nDie zuständige kantonale IV-Stelle\
\ prüft, gestützt auf die Diagnose des \nHNO-Facharztes, ob die Voraussetzungen\
\ für den Anspruch auf einen \nPauschalbetrag für eine Hörgeräteversorgung erfüllt\
\ sind. Die IV-Stelle er -\nlässt anschliessend im sogenannten formlosen Verfahren\
\ eine Mitteilung. \nIst eine Verfügung zu erlassen, so ist die Ausgleichskasse\
\ des Kantons, in \nwelchem die IV-Stelle ihren Sitz hat, dafür zuständig. \n\
6 Wie erhalte ich den Pauschalbetrag?\nReichen Sie das Rechnungsformular, welches\
\ Sie von der IV erhalten \nhaben, ausgefüllt bei dieser ein. Legen Sie dem Formular\
\ die Kopie der \nRechnung des Hörgeräteverkäufers bei. Diese muss alle Informationen\
\ \nenthalten, die auf der Rückseite des Rechnungsformulars aufgeführt sind.\
\ \f5Fachverbände und Organisationen\nWeitere Informationen erhalten Sie von den\
\ folgenden Fachverbänden und \nOrganisationen:\nwww.akustika.ch \nSchweizerischer\
\ Fachverband der Hörgeräteakustik \nOberneuhofstrasse 3 \n6340 Baar \nTel.\
\ 041 750 90 00\nwww.hörsystemakustik.ch \nHörsystemakustik Schweiz \nSeilerstrasse\
\ 22 \n3001 Bern \nTel. 031 310 20 31\nwww.pro-audito.ch / www.neutrale-hörberatung.ch\
\ \npro audito schweiz \nFeldeggstrasse 69 \n8008 Zürich \nTel. 044 363 12\
\ 00, Neutrale Hörberatung 0800 400 333\nwww.ecoute.ch \nforom écoute \nAvenue\
\ Général-Guisan 117 \n1009 Pully \nTel. 0800 614 614 \nwww.atidu.ch \nAssociazione\
\ per persone con problemi d’udito \nSalita Mariotti 2 \n6500 Bellinzona \n\
Tel. 091 857 15 32\nwww.orl-hno.ch \nSchweizerische Gesellschaft für Oto-Rhino-Laryngologie,\
\ \nHals- und Gesichtschirurgie \nGeschäftsstelle \nIMK Institut für Medizin\
\ und Kommunikation AG \nMünsterberg 1 \n4001 Basel \nTel. 061 561 53 53\f\
6Auskünfte und weitere \nInformationen\nDieses Merkblatt vermittelt nur eine\
\ Übersicht. Für die Beurteilung von \nEinzelfällen sind ausschliesslich die gesetzlichen\
\ Bestimmungen mass -\ngebend. Die Ausgleichskassen, ihre Zweigstellen und die\
\ IV-Stellen ge -\nben gerne Auskunft. Ein Verzeichnis aller Ausgleichskassen\
\ finden Sie \nunter www.ahv-iv.ch .\nHerausgegeben von der Informationsstelle\
\ AHV/IV in Zusammenarbeit \nmit dem Bundesamt für Sozialversicherungen.\nNachdruck\
\ November 2024. Auch auszugsweiser Abdruck ist nur mit \nschriftlicher Einwilligung\
\ der Informationsstelle AHV/IV erlaubt. \nDieses Merkblatt kann bei den Ausgleichskassen\
\ und deren Zweig- \nstellen sowie den IV-Stellen bezogen werden. Bestellnummer\
\ 3.07/d. \nEs ist ebenfalls unter www.ahv-iv.ch verfügbar.\n3.07-23/01-D"
- "1.2025 Allgemeines\nÄnderungen auf \n1. Januar 2025\nStand am 1. Januar 2025\f\
2Übersicht\nDieses Merkblatt informiert Sie über die Änderungen auf 1. Januar\
\ 2025 \nbei den Beiträgen und Leistungen.\n Randziffern\nBeiträge \
\ 1-4\nLeistungen der AHV 5-6\nLeistungen der IV 7-9\nErgänzungsleistungen\
\ (EL) und \nÜberbrückungsleistungen für ältere Arbeitslose (ÜL) 10-11\n\
Berufliche Vorsorge (bV) 12\nFamilienzulagen (FamZ) 13-14\f3Beiträge\n\
1 Beiträge der Arbeitgeber und Arbeitnehmer\nNeu sind auf Löhnen unter 2 500 Franken\
\ nur Beiträge zu bezahlen, wenn \ndies die Arbeitnehmenden verlangen (bisher\
\ 2 300 Franken). \n2 Beiträge der Selbständigerwerbenden\nDer Mindestbeitrag\
\ wird von 514 Franken auf 530 Franken erhöht. Die \nbetragliche Höchstlimite\
\ der sinkenden Beitragsskala für Selbständigerwer -\nbende liegt neu bei 60 500\
\ Franken (bisher 58 800 Franken). Die untere \nEinkommensgrenze wird auf 10 100\
\ Franken erhöht (bisher 9 800 Franken).\nDie sinkende Beitragsskala für Selbständigerwebende\
\ \nab 1. Januar 2025\nJährliches Erwerbseinkommen in CHF AHV/IV/EO-Beitrags\
\ -\nsatz in % des \nErwerbseinkommens von mindestens aber weniger als\n10 100\
\ 17 600 5.371\n17 600 23 000 5.494\n23 000 25 500 5.617\n25 500 28 000 5.741\n\
28 000 30 500 5.864\n30 500 33 000 5.987\n33 000 35 500 6.235\n35 500 38 000 6.481\n\
38 000 40 500 6.728\n40 500 43 000 6.976\n43 000 45 500 7.222\n45 500 48 000 7.469\n\
48 000 50 500 7.840\n50 500 53 000 8.209\n53 000 55 500 8.580\n55 500 58 000 8.951\n\
58 000 60 500 9.321\n60 500 10.000\nSelbstständige Einkommen im Nebenerwerb\
\ unterliegen künftig der Bei -\ntragspflicht erst ab einem Betrag von 2 500 Franken\
\ (bisher 2 300 Franken). \f43 Beiträge der Nichterwerbstätigen \nVermögen und\
\ mit 20 verviel -\nfachtes jährliches Rentenein -\nkommenAHV/IV/EO-Beiträge im\n\
Jahr Monat\nunter CHF 350 000.00 530.00 44.20\nab CHF 350 000.00 636.00 53.00\n\
\ 400 000.00 742.00 61.80\n 450 000.00 848.00 70.70\n 500 000.00\
\ 954.00 79.50\n 550 000.00 1 060.00 88.30\n 600 000.00 1 166.00 97.20\n\
\ 650 000.00 1 272.00 106.00\n 700 000.00 1 378.00 114.80\n \
\ 750 000.00 1 484.00 123.70\n 800 000.00 1 590.00 132.50\n 850\
\ 000.00 1 696.00 141.30\n 900 000.00 1 802.00 150.20\n 950 000.00\
\ 1 908.00 159.00\n 1 000 000.00 2 014.00 167.80\n 1 050 000.00 2 120.00\
\ 176.70\n 1 100 000.00 2 226.00 185.50\n 1 150 000.00 2 332.00 194.30\n\
\ 1 200 000.00 2 438.00 203.20\n 1 250 000.00 2 544.00 212.00\n 1 300\
\ 000.00 2 650.00 220.80\n 1 350 000.00 2 756.00 229.70\n 1 400 000.00\
\ 2 862.00 238.50\n 1 450 000.00 2 968.00 247.30\n 1 500 000.00 3 074.00\
\ 256.20\n 1 550 000.00 3 180.00 265.00\n 1 600 000.00 3 286.00 273.80\n\
\ 1 650 000.00 3 392.00 282.70\n 1 700 000.00 3 498.00 291.50\n 1 750\
\ 000.00 3 604.00 300.30\n 1 800 000.00 3 763.00 313.60\n 1 850 000.00\
\ 3 922.00 326.80\n... ... ...\n8 900 000.00 26 341.00 2 195.10\n8 950 000.00\
\ 26 500.00 2 208.30\f5Der jährliche AHV/IV/EO-Mindestbeitrag für Nichterwerbstätige\
\ beträgt \nneu 530 Franken (bisher 514 Franken). Der jährliche AHV/IV/EO-Höchstbei\
\ -\ntrag für Nichterwerbstätige entspricht 50 Mal dem Mindestbeitrag und be -\n\
trägt neu 26 500 Franken (bisher 25 700 Franken). Zwischen diesen Werten \nsteigen\
\ die Beiträge stufenweise an. Diese Stufen entsprechen dem Vermö -\ngen und dem\
\ um 20 vervielfachten jährlichen Renteneinkommen. Die erste \ndieser Stufen beginnt\
\ neu bei 350 000 Franken (bisher 340 000 Franken).\nNichterwerbstätige Ehefrauen\
\ und Ehemänner sind grundsätzlich von der \nBeitragspflicht befreit, sofern der\
\ Ehegatte oder die Ehegattin bei der AHV \nals Erwerbstätiger oder Erwerbstätige\
\ gilt und mindestens den doppelten \nMindestbeitrag, also 1 060 Franken pro Kalenderjahr,\
\ entrichtet.\n4 Freiwillige Versicherung\nDer Mindestbeitrag an die freiwillige\
\ Versicherung beträgt neu 1 010 Fran -\nken (bisher 980 Franken). Die Obergrenze\
\ erhöht sich von 24 500 Franken \nauf 25 250 Franken. \nWer die Schweiz verlässt,\
\ ist nicht mehr obligatorisch versichert. Wer der \nfreiwilligen Alters-, Hinterlassenen-\
\ und Invalidenversicherung beitritt, \nführt den Versicherungsschutz lückenlos\
\ weiter. Weitere Informationen zu \nden Fristen finden Sie im Merkblatt 10.02\
\ – Freiwillige Alters-, Hinterlasse -\nnen- und Invalidenversicherung .\f6Leistungen\
\ der AHV \n5 Renten\nRenten der AHV Minimal- \nrenteMaximal -\nrente\nz. B.\
\ Skala 44 in CHF pro Monat\nAltersrente 1 260 2 520\nHöchstbetrag der beiden\
\ Renten \neines Ehepaares3 780\nWitwen-/Witwerrente 1 008 2 016\nZusatzrente\
\ für Ehefrauen, die 1941 \noder früher geboren sind bzw. für Ehe -\ngatten, für\
\ die zuvor eine Zusatzrente \nder IV ausgerichtet wurde378 756\nWaisen- und Kinderrente\
\ 504 1 008\nHöchstbetrag bei gleichzeitigem An -\nspruch auf zwei Kinderrenten\
\ oder eine \nKinder- und eine Waisenrente für das \ngleiche Kind1 512\n6 Hilflosenentschädigung\n\
Hilflosenentschädigung der AHV in CHF pro Monat\nbei Hilflosigkeit leichten Grades\
\ (zu Hause) 252\nbei Hilflosigkeit mittleren Grades 630\nbei Hilflosigkeit schweren\
\ Grades 1 008\f7Leistungen der IV \n7 Renten \nBei einem Invaliditätsgrad ab\
\ 70 Prozent besteht Anspruch auf eine ganze \nIV-Rente.\nGanze ordentliche IV-Vollrente.\n\
Mindestrente 1 260.00 CHF pro Monat\nMaximalrente 2 520.00 CHF pro Monat\nDie\
\ Kinderrente beträgt jeweils 40 % der IV-Rente der anspruchsberech -\ntigten\
\ Person.\n8 Hilflosenentschädigung der IV \nHilflosenentschädigung IV\nHilflosigkeit\
\ im Heim im eigenen Zuhause\nCHF pro Monat CHF pro Monat\nleichten Grades 126\
\ 504\nmittleren Grades 315 1 260\nschweren Grades 504 2 016\nHilflosenentschädigung\
\ IV für Minderjährige\nHilflosigkeit CHF pro Tag CHF pro Monat\nleichten Grades\
\ 16.80 504\nmittleren Grades 42.00 1 260\nschweren Grades 67.20 2 016\nIntensivpflegezuschlag\
\ für Minderjährige\nBetreuungsaufwand Intensivpflegezuschlag\nCHF pro Tag CHF\
\ pro Monat\nmindestens 4 Stunden 33.60 1 008\nmindestens 6 Stunden 58.80 1 764\n\
mindestens 8 Stunden 84.00 2 520\f89 Assistenzbeitrag\nDer Assistenzbeitrag beträgt\
\ 35.30 Franken pro Stunde. \nMuss die Assistenzperson für die benötigten Hilfeleistungen\
\ aufgrund der \nBeeinträchtigung der versicherten Person über besondere Qualifikationen\
\ \nverfügen, so beträgt der Assistenzbeitrag 52.95 Franken pro Stunde.\nDer Ansatz\
\ für den Nachtdienst wird im Einzelfall und nach Intensität der \nzu erbringenden\
\ Hilfeleistung festgelegt. Er beträgt jedoch höchstens \n169.10 Franken pro Nacht.\f\
9Ergänzungsleistungen der AHV und IV (EL) und \nÜberbrückungsleistungen für ältere\
\ Arbeitslose (ÜL)\n10 Betrag für den allgemeinen Lebensbedarf\nin CHF pro Jahr\n\
für Alleinstehende 20 670.–\nfür Ehepaare 31 005.–\nRentenberechtigte Waisen und\
\ Kinder, die einen Anspruch auf eine \nKinderrente der AHV oder IV begründen\n\
0 - 10 Jahre 11 - 25 Jahre\nfür das erste Kind 7 590.– 10 815.–\nfür das zweite\
\ Kind 6 325.– 10 815.–\nfür das dritte Kind 5 270.– 7 210.–\nfür das vierte Kind\
\ 4 390.– 7 210.–\nfür jedes weitere Kind 3 660.– 3 605.–\n11 Mietzins \nDie Mietzinsmaxima\
\ richten sich nach Haushaltsgrösse und Region.\nMietzins- \nregion1 1 \n(Grosszent\
\ -\nrum)Mietzins- \nregion1 2 \n(Stadt)Mietzins \nregion1 3 \n(Land)\nAlleinlebend\
\ CHF 18 900.– CHF 18 300.– CHF 16 680.–\nEhepaar ohne Kinder / \nAlleinstehend\
\ mit einem \nKindCHF 22 320.– CHF 21 720.– CHF 20 160.–\nEhepaar mit einem Kind\
\ \n/ Alleinstehend mit zwei \nKindernCHF 24 780.– CHF 23 760.– CHF 22 200.–\n\
Ehepaar mit zwei und \nmehr Kindern / Alleinste -\nhend mit drei und mehr \nKindernCHF\
\ 27 060.– CHF 25 920.– CHF 24 000.–\nKonkubinatspaare (Zwei -\npersonenhaushalt)\
\ pro \nPersonCHF 11 160.– CHF 10 860.– CHF 10 080.–\nWeitere Informationen dazu\
\ finden Sie im Merkblatt 5.01 – Ergänzungs -\nleistungen zur AHV und IV und
\ 5.03 – Überbrückungsleistung für ältere \nArbeitslose . \f10Berufliche Vorsorge\
\ (bV)\n12 Der obligatorischen Versicherung unterstellte Löhne\nGrenzbeträge in\
\ der obligatorischen beruflichen Vorsorge in CHF\nMindestjahreslohn 22 680\n\
minimaler koordinierter Jahreslohn 3 780\nKoordinationsabzug 26 460\nobere Limite\
\ des Jahreslohnes 90 720\f11Familienzulagen (FZ)\n13 Neue Eckwerte \nEinkommen\
\ für Anspruch auf \nFamilienzulagenim Jahr \nin CHFim Monat \nin CHF \nMindesteinkommen\
\ für Anspruch \nauf FZ für Erwerbstätige \n(halbe minimale volle AHV-Rente)7\
\ 560 630\nMaximales Einkommen des Kindes für \nAnspruch auf Ausbildungszulagen\
\ \n(maximale volle AHV-Rente)30 240 2 520\nMaximales steuerbares Einkommen für\
\ \nAnspruch auf FZ für Nichterwerbstätige \n(anderthalbe maximale volle AHV-Rente)45\
\ 360 3 780\n14 Neue Mindestansätze für Kinder- und \nAusbildungszulagen\nDie\
\ Kinderzulage beträgt mindestens 215 Franken pro Monat (bisher 200 \nFranken).\n\
Die Ausbildungszulage beträgt mindestens 268 Franken pro Monat (bisher \n250 Franken).\f\
12Auskünfte und weitere \nInformationen\n1.2025-25/01-DDieses Merkblatt vermittelt\
\ nur eine Übersicht. Für die Beurteilung \nvon Einzelfällen sind ausschliesslich\
\ die gesetzlichen Bestimmungen \nmassgebend. Die Ausgleichskassen und ihre Zweigstellen\
\ geben gerne \nAuskunft. Ein Verzeichnis aller Ausgleichskassen finden Sie unter\
\ \nwww.ahv-iv.ch .\nHerausgegeben von der Informationsstelle AHV/IV in Zusammenarbeit\
\ \nmit dem Bundesamt für Sozialversicherungen.\nAusgabe November 2024. Auch auszugsweiser\
\ Abdruck ist nur mit \nschriftlicher Einwilligung der Informationsstelle AHV/IV\
\ erlaubt.\nDieses Merkblatt kann bei den Ausgleichskassen und deren Zweig- \n\
stellen sowie den IV-Stellen bezogen werden. Bestellnummer 1.2025/d. \nEs ist\
\ ebenfalls unter www.ahv-iv.ch verfügbar."
- source_sentence: Welche Leistungen bietet die AHV für Personen aus Nichtvertragsstaaten?
sentences:
- "2.09 Cotisations\nStatut des indépendants\ndans les assurances\nsociales suisses\n\
État au 1er janvier 2025\f2En bref\nCe mémento fournit des informations sur les\
\ cotisations que doivent verser \naux assurances sociales suisses les personnes\
\ ayant le statut d’indépen -\ndant, ainsi que sur les prestations auxquelles\
\ elles ont droit. \nC’est aux caisses de compensation qu’il appartient de décider\
\ si quelqu’un \na le statut d’indépendant au sens du droit des assurances sociales.\
\ \nLe mémento 2.02 - Cotisations des indépendants à l’AVS, à l’AI et aux APG\
\ \net la site www.independant-suisse.ch fournit des informations sur les dif\
\ -\nférences entre une activité lucrative indépendante et une activité salariée.\n\
Assurance-vieillesse et survivants (AVS), \nassurance-invalidité (AI) et \n\
allocations pour perte de gain (APG)\n1 Quand dois-je cotiser à l’AVS, à l’AI\
\ et aux APG ?\nSi vous exercez une activité lucrative indépendante en Suisse,\
\ vous devez \nverser des cotisations à l’AVS, à l’AI et aux APG. Le revenu de\
\ votre activité \nindépendante pris en compte pour la taxation de l’impôt fédéral\
\ direct sert \nde base au calcul des cotisations. Les autorités fiscales communiquent\
\ le \nrevenu net, c’est-à-dire le revenu avant rajout des cotisations personnelles\
\ \nà l’AVS, à l’AI et aux APG. Les caisses de compensation déduisent de ce \n\
revenu l’intérêt calculé sur le capital propre investi dans l’entreprise ainsi\
\ \nqu’une éventuelle franchise de cotisation. Elles appliquent ensuite une for\
\ -\nmule à ce résultat pour le ramener au montant avant déduction.\nEn tant qu’indépendant,\
\ vous cotisez à hauteur de 10 % du revenu ainsi \ncalculé. Si votre revenu n’atteint\
does not reach the threshold set by the Federal Council, the applicable contribution rate is determined according to a regressive scale. The compensation offices also levy contributions towards administrative costs of no more than 5 % of the AVS, AI and APG contributions.
Leaflet 2.02 - Self-employed persons' contributions to the AVS, AI and APG provides further information on how contributions are calculated and collected. It is available at www.avs-ai.ch.

2 How are AVS and AI benefits calculated?
AVS and AI benefits are calculated in the same way for employees and for the self-employed.
Leaflets 3 - AVS benefits and 4 - AI benefits, published by the AVS/AI Information Centre, provide more detail on this point. All leaflets are available at www.avs-ai.ch.

3 How are APG allowances calculated?
The allowance is calculated on the basis of the income earned before entering service. When the relevant conditions are met, you are entitled, as a self-employed person, to an operating allowance in addition to the loss-of-earnings allowance.
You will find further information on this subject in leaflet 6.01 - Loss-of-earnings allowances.

4 How are the maternity allowance, the allowance for the other parent, the care allowance and the adoption allowance calculated?
The principles governing the calculation of the maternity allowance, the allowance for the other parent (the father or the mother's wife), the care allowance and the adoption allowance for employees also apply to the self-employed.
You will find further information in leaflets 6.02 - Maternity allowance, 6.04 - Allowance for the other parent (the father or the mother's wife), 6.10 - Care allowance and 6.11 - Adoption allowance.

5 Which bodies implement the scheme?
Your contact is the compensation office of your canton or of your professional association. A complete list of compensation offices is available at www.avs-ai.ch.

6 Are AVS, AI and APG contributions tax-deductible?
As a self-employed person, you may deduct from your operating result, as business-justified expenses, all personal contributions paid to acquire entitlement to AVS, AI and APG benefits.
The contributions you pay as an employer to the AVS, AI, APG and unemployment insurance (AC) on behalf of your employees may likewise be deducted in full from the operating result as business-justified expenses.
7 Are AVS and AI benefits and APG allowances taxable?
The counterpart of fully deductible contributions is fully taxable benefits: AVS and AI benefits and APG allowances are taxed in full.
Certain benefits are nevertheless tax-exempt, notably:
• assistance benefits from public funds (e.g. helplessness allowances) and from private funds,
• military pay and the duty allowance for civil protection service,
• the pocket money of persons performing civilian service,
• supplementary benefits.
Family allowances (LAFam / LFA)
8 Am I subject to the Federal Family Allowances Act (LAFam)?
Yes. As a person carrying on a self-employed activity in Switzerland, you are subject to the Federal Family Allowances Act (LAFam). You must therefore join a family allowances compensation fund (CAF). As a rule, CAFs are managed by the compensation offices.

9 Am I subject to the Federal Act on Family Allowances in Agriculture (LFA)?
No. Under the Federal Act on Family Allowances in Agriculture (LFA), self-employed farmers are not required to pay contributions for family allowances. You will find further information in leaflet 6.09 - Family allowances in agriculture.

10 What are the contributions and benefits?
As a self-employed person, you must pay contributions to your CAF on your income; the income subject to contributions is capped at 148,200 francs per year. Contribution rates vary by canton and by CAF.
You are entitled to family allowances, including child allowances of at least 215 francs and education allowances of at least 268 francs, per child and per month. Several cantons provide higher amounts as well as additional benefits such as a birth allowance or an adoption allowance.
You will find further information in leaflet 6.08 - Family allowances.

Unemployment insurance (AC)
11 Can I join the unemployment insurance scheme?
No. Self-employed persons cannot join the unemployment insurance scheme and are therefore not insured against unemployment. You will find further information in leaflet 2.08 - Unemployment insurance contributions.

Occupational pension provision (2nd pillar)
12 As a self-employed person, am I subject to mandatory occupational pension provision?
No. As a self-employed person, you are not subject to mandatory occupational pension provision (Federal Act on Occupational Old-Age, Survivors' and Invalidity Pension Provision [LPP]).

13 Can I take out voluntary insurance?
Yes. As a person carrying on a self-employed activity, you may insure yourself voluntarily under the occupational pension scheme in order to build up retirement capital and to cover the risks of invalidity and death (Art. 4 LPP). Several options are available (see nos. 14 to 20).

14 Can I join the pension institution of a professional or trade association?
Yes. You may also take out insurance with the pension institution to which you belong by virtue of your occupation (Art. 44 para. 1 LPP). Many professional or trade associations allow you, as a self-employed person, to join their pension institutions (usually collective foundations). This is the case for several liberal professions (e.g. self-employed lawyers, doctors or musicians) and for many trades, for example through the foundation "proparis Prévoyance arts et métiers Suisse". In addition, employers' associations, chambers of commerce and industry and other bodies can provide information on affiliation options for your occupation.
Beyond the minimum plan corresponding to the mandatory coverage of employees, several pension institutions offer plans with more extensive coverage (extra-mandatory provision). These plans provide additional benefits, such as a higher pension or better risk coverage, and accordingly levy higher contributions. More detailed information is available from the professional association concerned or from the pension institution. As a self-employed person, you can also insure yourself solely with a pension institution active in the field of extended provision, in particular with a pension institution not entered in the register of occupational pension provision.

15 Can I join the substitute occupational benefit institution?
If you are not subject to mandatory coverage and have no access to another pension institution (Art. 44 para. 2 LPP), you are entitled to insure yourself with the substitute occupational benefit institution. The latter has an agency in each of the three main language regions (see leaflet 6.06 - Obligation to join a pension institution under the LPP). The substitute institution offers you a pension plan whose coverage is equivalent to the minimum mandatory occupational pension coverage of employees.
The insurable income corresponds to the coordinated salary of employees subject to mandatory coverage (under Art. 8 LPP, the portion of annual salary between 26,460 and 90,720 francs must be insured). You may request that the portion of income subject to AVS between 90,720 francs and the maximum salary under accident insurance (148,200 francs per year) be insured under more extensive coverage.
The substitute institution's website provides the relevant amounts and examples of benefit calculations (www.aeis.ch).

16 Must I join a pension institution if I employ staff?
Yes. If, as a self-employed person, you employ staff subject to mandatory insurance, you must be affiliated to a pension institution entered in the register of occupational pension provision (Art. 11 para. 1 LPP). The persons you employ are compulsorily insured with that institution. You may yourself join the pension institution that insures your employees (Art. 44 para. 1 LPP) and thus enjoy the same pension benefits.

17 What other solutions do insurers and banks offer (3rd pillar)?
Insurers and banks offer various old-age provision options under the 3rd pillar (pillar 3a, restricted individual provision). These take the form either of pure retirement savings or of retirement savings combined with insurance coverage. In the latter case, premiums may vary both with the extent of the risk coverage (invalidity and death) and with the insurers' offerings.
The same applies to the various forms of capital investment, such as mixed funds composed, for example, of bonds and shares. The investment products on offer are very diverse, so their return prospects and risks can differ widely. Before choosing a product, it is advisable to examine the offering thoroughly in light of your needs.

18 What are the pension benefits?
The main purpose of occupational pension provision is to provide the insured person, once the reference age is reached, with an old-age pension in addition to the AVS pension, so that they have a sufficient income after ceasing work. The amount of the pension depends mainly on the capital available at retirement. That capital consists of the contributions paid over the years plus interest.
Most pension plans include invalidity benefits and survivors' benefits in the event of the insured person's death. The extent of these benefits is defined in the pension regulations of the plan or institution.
19 Which contributions to the occupational pension institution are tax-deductible?
The contributions you pay as an employer to the pension institution for your employees are treated as operating expenses and may be deducted in full from the business result (Art. 81 LPP and Art. 27 para. 2 let. c LIFD).
The contributions you pay as a self-employed person for your own occupational pension provision may be treated as operating expenses only up to the "employer's share", that is, the share you pay as an employer for your staff's pension provision. The contributions you pay as a self-employed person that represent the "employee's share" come from private funds; they may only be claimed among the general deductions and cannot be deducted from the operating result of the business. If you have no employees, at most 50 % of the contributions paid count as the "employer's share".
The contributions you pay into the 3rd pillar under restricted individual provision (pillar 3a) are also deductible from income, within the limits set by Art. 7 OPP 3. If, as a self-employed person, you are not affiliated to any 2nd-pillar pension fund, the annual deductible contributions are limited to 20 % of annual income, up to a maximum of 36,288 francs (the ceiling for self-employed persons and employees who are affiliated to a pension fund currently being 7,258 francs).
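The deduction limits just described can be sketched numerically. This is a rough illustration only: the function name and decision structure are our own, and the franc amounts are the 2025 figures quoted above, not a complete tax calculator.

```python
# Illustrative sketch of the pillar 3a deduction limits described above.
# Figures (36,288 and 7,258 francs) are taken from the text; the function
# itself is a simplified assumption, not an official formula.

def max_pillar_3a_deduction(annual_income: float, has_pension_fund: bool) -> float:
    """Return the maximum deductible pillar 3a contribution in francs."""
    if has_pension_fund:
        # Self-employed persons and employees affiliated to a pension fund:
        # flat annual ceiling.
        return 7_258.0
    # No 2nd-pillar pension fund: 20 % of annual income, capped.
    return min(0.20 * annual_income, 36_288.0)

print(max_pillar_3a_deduction(100_000, has_pension_fund=False))  # 20000.0
print(max_pillar_3a_deduction(250_000, has_pension_fund=False))  # 36288.0
print(max_pillar_3a_deduction(100_000, has_pension_fund=True))   # 7258.0
```

The cap binds only once 20 % of income exceeds 36,288 francs, i.e. from roughly 181,440 francs of annual income upwards.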
20 Which occupational pension benefits are taxable?
Occupational pension benefits paid as a pension are added to other taxable income and taxed in full on that basis. Occupational pension benefits paid as a lump sum are taxed separately from other income and subject to a full annual tax at a reduced rate. For direct federal tax, this rate is one fifth of the regular tax scales.

Accident insurance
21 Can I take out voluntary insurance?
As a self-employed person, you are not automatically insured against accidents in Switzerland.* The Federal Accident Insurance Act (LAA) nevertheless allows you to take out voluntary accident insurance under the LAA for yourself and for the members of your family who work with you, provided you are domiciled in Switzerland.
Under the LAA, self-employed persons are workers who are not employees, and employees are those who receive a determining salary within the meaning of the AVS Act. It is also possible to work partly as a self-employed person and partly as an employee; in that case, too, you may take out voluntary insurance.
You may also take out such insurance once you have reached the reference age if you were compulsorily insured during the year preceding retirement.
By contrast, if you carry on no gainful activity and merely employ domestic staff, you cannot insure yourself voluntarily.
* Mandatory health insurance does, however, also cover treatment costs in the event of an accident.
22 How are premiums calculated?
Premiums for voluntary accident insurance are calculated on the basis of the insured earnings agreed when the contract is signed, which may be changed at the start of each calendar year. If you carry on a self-employed gainful activity, this amount may not be less than 45 % of the maximum insured earnings (148,200 francs as of 1 January 2016). For the members of your family, it may not be less than 30 % of that amount.
Premiums consist of a risk-based net premium plus supplements for administrative costs. In voluntary insurance, no premium supplements are levied for cost-of-living allowances or for the prevention of occupational accidents and diseases and of non-occupational accidents.

23 What benefits are insured under the LAA?
The provisions on mandatory insurance apply by analogy to voluntary insurance. The following benefits are insured:
• treatment benefits;
• reimbursement of costs;
• cash benefits (daily allowances, invalidity pension, compensation for impairment of integrity, helplessness allowance and survivors' pensions).

24 Who are the insurers?
Voluntary insurance is run by the same insurers as mandatory insurance, namely Suva and the insurers designated in Art. 68 LAA.
If you employ staff subject to mandatory insurance, the insurer covering the company's staff in principle also runs the voluntary insurance for you and for the members of your family working in the business.
If you employ no staff and work in an economic sector within Suva's remit, you may take out voluntary insurance only with Suva. The same applies to the members of your family who work in the business.
If you work in an economic sector outside Suva's remit, you may choose your insurer from among those designated in Art. 68 LAA. These insurers are not obliged to accept an application.
If you perform service (such as military service), you are insured against accidents by the military insurance (which is run by Suva).

25 Which accident insurance contributions are tax-deductible?
The premiums you pay to mandatory accident insurance on behalf of your employees may be deducted in full from the operating result as business-justified expenses. The premiums you pay voluntarily as a self-employed person for your own accident insurance may be deducted from the operating result as business-justified expenses only up to the level of the premiums paid for the other employees. If you employ no staff, the premiums you pay for your own insurance are split between:
• business expenses, which you may deduct from the operating result as business-justified expenses, and
• private expenses, which you may claim among the general deductions for insurance.

26 Which accident insurance benefits are taxable?
Accident insurance benefits paid as a pension are added to other taxable income and taxed in full on that basis. Benefits paid as a lump sum are taxed separately from other income and subject to a full annual tax at a reduced rate. For direct federal tax, this rate is one fifth of the regular tax scales.

Information and further details
This leaflet provides only a general overview. For the settlement of individual cases, only the statutory provisions are authoritative. The compensation offices and their agencies will gladly provide any useful information. A complete list of compensation offices is available at www.avs-ai.ch.
The civil-status terms used here also have the following meanings:
• marriage: registered partnership;
• divorce: judicial dissolution of the registered partnership;
• death of the spouse: death of the registered partner.
Published by the AVS/AI Information Centre in cooperation with the Federal Social Insurance Office.
November 2024 edition. Reproduction, even in part, is permitted only with the written consent of the AVS/AI Information Centre.
This leaflet is available from the compensation offices and their agencies as well as from the AI offices. Order number 2.09/f. It is also available at www.avs-ai.ch.
More information, publications and explanatory videos.
2.09-25/01-F
3.03 AVS benefits
AVS survivors' pensions
Status as of 1 January 2025

In brief
The survivors' pension exists to prevent the death of a spouse or of a parent from placing the surviving spouse and the children in financial difficulty. There are three types of survivors' pensions:
• the widow's pension,
• the widower's pension,
• the orphan's pension.
You are entitled to a survivors' pension only if the deceased could show at least one full year of contributions. This condition is met
• when the deceased had one year of contributions, or
• when the deceased was insured and their spouse paid at least double the minimum contribution for at least one year, or
• when the deceased could be credited with child-raising or care credits.

Widow's pension
1 In what circumstances am I entitled, as a married woman, to a widow's pension?
If you are married and your husband or wife dies, you are entitled to a widow's pension
• if you have one or more children (their age is irrelevant) at the time of your spouse's death. Children of your deceased spouse who live in the same household as you and give entitlement to an orphan's pension are treated as your own children. The same applies to foster children taken in by you and your spouse, provided you adopt them after being widowed. The mother's wife is also considered a widow with a child if she was married to the mother at the time of the birth and the child was conceived in accordance with the Federal Act on Medically Assisted Reproduction, so that a parent-child relationship exists (Art. 255a para. 1 CC), or
• if you are at least 45 years old at the time of your spouse's death and were married for at least five years. If you were married more than once, the durations of the successive marriages are added together for the pension calculation. For same-sex couples who converted their registered partnership into a marriage, the duration of the partnership is added to the years of marriage.

2 In what circumstances am I entitled, as a divorced woman, to a widow's pension?
If you are divorced and your former husband or wife dies, you are entitled to a widow's pension
• if you have children and the dissolved marriage lasted at least ten years, or
• if you were over 45 at the time of the divorce and the dissolved marriage lasted at least ten years, or
• if your youngest child is under 18 when you reach the age of 45.
If you meet none of these conditions, you are entitled to a widow's pension until your youngest child turns 18.
The divorced wife of the mother is also considered a widow with a child if she was married to the mother at the time of the birth and the child was conceived in accordance with the Federal Act on Medically Assisted Reproduction, so that a parent-child relationship exists (Art. 255a para. 1 CC).
If the marriage was established by converting a registered partnership, in the event of divorce the duration of the registered partnership is added to the years of marriage.

Widower's pension
3 In what circumstances am I entitled, as a married man or as a registered partner, to a widower's pension?
If you are married and your wife or husband dies, you are entitled to a widower's pension if, at the time of widowhood, you have one or more children (regardless of their age). Children of your deceased spouse who live in the same household as you and give entitlement to an orphan's pension are treated as your own children. The same applies to foster children you had been caring for with your spouse, provided you adopt them after being widowed.
If your registered partner dies, you are treated as a widower.
In its judgment of 11 October 2022, the Grand Chamber of the European Court of Human Rights held that unequal treatment contrary to the European Convention on Human Rights (ECHR) had occurred because the appellant's entitlement to a widower's pension ended when his youngest child came of age, whereas no such termination applies to a widow in the same situation.
Switzerland must comply with this judgment and end the violation found as of the judgment's entry into force on 11 October 2022. The legal basis must be amended accordingly through the legislative process. As that process may take considerable time, the adaptation will take effect only later. Until then, the transitional regime in force since 11 October 2022 applies to widowers with children: their entitlement to a widower's pension no longer ends when their youngest child turns 18, and the pension continues to be paid beyond that age.
The ECHR judgment applies neither to widowers without children nor to divorced men. On the basis of this judgment, widowers without children remain without entitlement to a widower's pension, and for divorced men the entitlement still ends in all cases when their youngest child turns 18. Nor does the judgment apply to cases in which the termination of the widower's pension on account of the youngest child's coming of age became final before 11 October 2022.

4 In what circumstances am I entitled, as a divorced man, to a widower's pension?
If you are divorced and your former wife dies, you are entitled to a widower's pension as long as you have children under 18.

Orphan's pension
5 In what circumstances are children entitled to an orphan's pension?
The AVS grants an orphan's pension to children one of whose parents dies.
If the mother was married to a woman at the time of the birth and the child was conceived in accordance with the Federal Act on Medically Assisted Reproduction (Art. 255a para. 1 CC), the mother's wife is considered the other parent. In that case, the child is also entitled to an orphan's pension on the death of the mother's wife.
If both parents die, the children are entitled to two orphan's pensions (one per deceased parent). The entitlement lasts until their 18th birthday. For children in education or training, it ends when they complete their training, but at the latest on their 25th birthday. Special provisions apply to foster children. Children whose gross annual earned income during their training exceeds 30,240 francs are not entitled to an orphan's pension.
Beginning and end of pension entitlement
6 When does entitlement to a survivors' pension begin?
Entitlement to a survivors' pension begins on the first day of the month following the death of the spouse, former spouse or parent.

7 When does entitlement to a survivors' pension end?
Entitlement to a survivors' pension ends at the end of the month in which the conditions are no longer met. Remarriage ends entitlement to a widow's or widower's pension; orphan's pensions, however, continue to be paid.
Concurrent benefits
8 Which pension will be paid?
If you simultaneously meet the conditions for a survivors' pension and for an old-age or invalidity pension, you receive whichever pension is higher.

Applying for a pension
9 Where do I claim my entitlement to a survivors' pension?
You must claim your entitlement to a survivors' pension from the last compensation office that collected the deceased's AVS contributions. Form 318.371 - Application for a survivors' pension is available from the compensation offices and their agencies, and at www.avs-ai.ch. The application must then be filed with the competent compensation office.
If you have completed insurance periods in Switzerland or in one or more EU or EFTA member states, a single application in your country of residence is sufficient; it triggers the notification procedure in all the countries concerned.
If the deceased did not contribute to the AVS, you must address your application for a survivors' pension to the cantonal compensation office or its agency.
If you live abroad, please consult the page "Applying for a survivors' pension" on the website of the Swiss Compensation Office (CSC): www.cdc.admin.ch

Calculation of survivors' pensions
10 How are survivors' pensions calculated?
The pension is calculated from the following elements:
• the contribution years that can be taken into account,
• the income from gainful activity, and
• the deceased's child-raising and care credits.
Where the deceased is a wife, former wife or mother, the contribution period must be determined for the calculation of the widower's and orphans' pensions: years of marriage before 31 December 1996 (which were contribution-free) count as contribution years.

11 Will I receive a full pension?
You will receive a full pension (pension scale 44) if the deceased had a complete contribution period from 1 January following their 20th birthday until their death.

12 Will I receive a partial pension?
You will receive a partial pension (pension scales 1-43) if the deceased had an incomplete contribution period. This pension is calculated according to the ratio between the number of years in which the deceased actually contributed and the complete contribution period.
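The proportionality rule above can be sketched as follows. The mapping of the ratio onto whole scale steps and the rounding are simplifying assumptions of ours, not the legal scale table:

```python
# Sketch of the proportionality rule described above: the pension scale
# (1-44) follows the ratio of the deceased's actual contribution years
# to the complete contribution period. Rounding behaviour is assumed.

def pension_scale(actual_years: float, full_years: float) -> int:
    """Map the contribution ratio onto the 44-step pension scale."""
    ratio = min(actual_years / full_years, 1.0)
    return max(1, round(44 * ratio))

print(pension_scale(40, 40))  # 44 -> full pension (scale 44)
print(pension_scale(20, 40))  # 22 -> partial pension
```

A complete contribution record yields scale 44 (full pension); half the required years yields roughly half the scale.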
13 Are the years of youth taken into account?
Years of youth are contribution periods completed between the ages of 18 and 20. Contribution periods completed by the deceased during those years can be taken into account to fill any later contribution gaps.

14 Are contribution periods completed after the reference age taken into account?
If the deceased continued a gainful activity after the reference age, those contribution periods can, under certain conditions, be taken into account to fill gaps, or the additional income can be used to increase the pension. The pension may be recalculated only once after the reference age.
If the deceased had not requested a recalculation, the survivors may do so for their survivors' pension.
Please see the further information in leaflet 3.08 - Recalculation of the old-age pension after the reference age.

15 What does the average annual income consist of?
The average annual income consists of
• the average income from gainful activity,
• the average child-raising credits, and
• the average care credits.

Average income from gainful activity
16 How is the average income from gainful activity calculated?
Survivors' pensions are calculated on the basis of the deceased's income from gainful activity.
To calculate the average income from gainful activity, all income from gainful activity earned up to 31 December of the year preceding the start of the pension entitlement is added together. Income from the years of youth is taken into account only to fill later contribution gaps.
A person's income from gainful activity is recorded in what is known as their individual account (CI).

17 Is the sum of income adjusted for wage and price trends?
Some income may date from years when wages were at a lower level. The sum of income is therefore revalued in line with average wage and price trends. The revalued sum is divided by the number of years and months that can be taken into account. The result is the average income from gainful activity.
18 What is the career supplement?
If the person dies before the age of 45, the average income from gainful activity is increased by a percentage supplement based on their age at death (career supplement):

Death at age …        Supplement
under 23               100 %
23 to under 24          90 %
24 to under 25          80 %
25 to under 26          70 %
26 to under 27          60 %
27 to under 28          50 %
28 to under 30          40 %
30 to under 32          30 %
32 to under 35          20 %
35 to under 39          10 %
39 to under 45           5 %
\ bonifications pour tâches éducatives et \npour tâches d’assistance\n19 Qu’est-ce\
\ que les bonifications pour tâches éducatives ?\nLa personne décédée peut être\
\ gratifiée de bonifications pour tâches édu -\ncatives pour les années durant\
\ lesquelles elle s’est occupée d’enfants de \nmoins de 16 ans. La bonification\
\ correspond au triple de la rente minimale \nannuelle. Dans le cas de personnes\
\ mariées, la bonification est partagée \npar moitié pour les années civiles de\
\ mariage. Ce partage ne concerne tou -\ntefois que les bonifications pour la\
\ période comprise entre le 1er janvier qui \nsuit le 20e anniversaire et le 31\
\ décembre qui précède l’accomplissement \nde l’âge de référence du conjoint le\
\ plus âgé. La moyenne des bonifications \npour tâches éducatives s’obtient en\
\ divisant la somme des bonifications par \nla durée de cotisation complète. \n\
Si les parents sont divorcés ou non mariés, et qu’ils exercent conjointement \n\
l’autorité parentale, la bonification pour tâches éducatives entière est at -\n\
tribuée à l’un d’entre eux ou par moitié à chacun d’eux, en fonction de la \n\
décision du tribunal ou de l’autorité de protection de l’enfant et de l’adulte\
\ \n(APEA), ou sur la base de l’accord passé entre les parents. Nous attirons\
\ \nvotre attention sur les informations complémentaires figurant dans le mé -\n\
mento 1.07 – Bonifications pour tâches éducatives.\f1120 Qu’est-ce que les bonifications\
\ pour tâches \nd’assistance ?\nLa personne décédée peut être gratifiée de bonifications\
\ pour tâches d’as -\nsistance pour les années pendant lesquelles elle s’est occupée\
\ de parent \nnécessitant des soins, qui habitaient à proximité et touchaient\
\ une alloca -\ntion pour impotence. Est assimilé/e aux parents le/la partenaire\
\ avec qui \nl’assuré/e fait ménage commun depuis au moins cinq ans. Elle n’en\
\ est \ncependant pas gratifiée pour les années pour lesquelles elle avait déjà\
\ droit \nà des bonifications pour tâches éducatives. La bonification correspond\
\ au \ntriple de la rente minimale annuelle. Dans le cas de personnes mariées,\
\ la \nbonification est partagée par moitié pour les années civiles de mariage.\
\ Ce \npartage ne concerne toutefois que les bonifications pour la période com\
\ -\nprise entre le 1er janvier qui suit le 20e anniversaire et le 31 décembre\
\ qui \nprécède l’accomplissement de l’âge de référence du conjoint le plus âgé.\
\ La \nmoyenne des bonifications pour tâches d’assistance s’obtient en divisant\
\ la \nsomme des bonifications par la durée complète de cotisation. \nLa demande\
\ d’attribution de bonifications pour tâches d’assistance doit \nêtre déposée\
\ chaque année pour l’année précédente, au moyen du formu -\nlaire 318.270 - Demande\
\ de bonifications pour tâches d’assistance , auprès \nde la caisse cantonale\
\ de compensation du domicile de la personne assis -\ntée.\nNous attirons votre\
\ attention sur les informations complémentaires figu -\nrant dans le mémento\
\ 1.03 - Bonifications pour tâches d’assistance .\f12Montant des rentes\n21 Quel\
\ est le montant des rentes à l’heure actuelle ?\nSi la durée de cotisation était\
\ complète, les survivants ont droit à une rente \nordinaire complète qui dépend\
\ du revenu moyen de la personne décédée :\nminimale maximale\nCHF/mois CHF/mois\n\
Rente de veuve ou de veuf 1 008.– 2 016.–\nRente d’orphelin 504.– 1 008.–\n\
Lorsqu’un enfant donne droit à deux rentes d’orphelin ou à une rente d’or -\n\
phelin et une rente pour enfant, la somme des deux ne doit pas dépasser \n1 512\
\ francs (60 % de la rente maximale de vieillesse).\nPrestations complémentaires\n\
22 Dans quelles circonstances ai-je droit à des prestations \ncomplémentaires\
\ ?\nSi vous êtes veuve, veuf ou orphelin et que votre situation économique est\
\ \nmodeste, vous avez droit à des prestations complémentaires, à certaines \n\
conditions. Nous attirons votre attention sur les informations complémen -\ntaires\
\ figurant dans les mémentos 5.01 – Prestations complémentaires à \nl’AVS et\
\ à l’AI et 5.02 – Votre droit aux prestations complémentaires à l’AVS \net à\
\ l’AI .\nSi vous résidez à l’étranger, vous n’avez pas droit aux prestations\
\ complé -\nmentaires.\f13Exemple de calcul\n23 Décès du mari et père\nUn homme\
\ né en juin 1975 décède en mars 2025. Il laisse une épouse et \ndeux enfants,\
\ nés en 2007 et 2008. Des bonifications pour tâches éduca -\ntives peuvent donc\
\ lui être imputées pour une durée de 17 ans. Une rente \nde veuve et deux rentes\
\ d’orphelin sont versées à partir du 1er avril 2025. Le \ndéfunt a cotisé à l’AVS\
\ sans interruption depuis 1996 jusqu’à son décès, ce \nqui ouvre le droit à des\
\ rentes complètes de survivants ( échelle de rentes 44 ).\nLa moyenne des revenus\
\ de l’activité lucrative est calculée sur la \nbase des comptes individuels comme\
\ suit :\nSomme des revenus réalisés pendant 29 \nannées de cotisation, de 1996\
\ à 2024 CHF 1 600 000.–\nCette somme de revenus divisée par la durée de \ncotisation\
\ déterminante (29 années) donne une \nmoyenne des revenus provenant \nde l’activité\
\ lucrative de CHF 55 172.–\nLa moyenne des bonifications pour tâches éducatives\
\ est \ncalculée comme suit :\nNombre d’années x triple de la rente annuelle\
\ \nminimale ÷ durée de cotisation ÷ deux ans \n17 x 45 360 francs ÷ 29 années\
\ ÷ 2 CHF 13 295.–\nCalcul du revenu annuel moyen et des rentes :\nMoyenne des\
\ revenus provenant de l’activité lucrative CHF 55 172.–\nMoyenne des bonifications\
\ pour tâches éducatives CHF 13 295.–\nRevenu annuel moyen \n(arrondi à la valeur\
\ des tables en annexe, \nEchelle 44 : Rentes complètes mensuelles ) de CHF 69\
\ 552.–\nSelon la table figurant en annexe \nles rentes s’élèvent à : \nune\
\ rente de veuve CHF 1 790.–\ndeux rentes d’orphelin, chacune de CHF 895.–\nAnnexes\n\
• Table des rentes complètes (échelle 44)\n• Table des facteurs de revalorisation\f\
14Rentes AVS/AI à partir du 1er janvier 2025\nEchelle 44 : Rentes complètes mensuelles\
\ Montants en francs\nBase de calcul Rente de \nvieillesse et \nd’invaliditéRente\
\ de \nvieillesse et \nd’invalidité \npour \nveuves/veufs Rentes de survivants\
\ et rentes complémentaires\nRevenu annuel \nmoyen \ndéterminantVeuves/ \nveufsRente\
\ \ncomplé -\nmentaireRente \nd’orphelin ou \npour enfantRente \nd’orphelin \n\
60 %*\n1/1 1/1 1/1 1/1\n jusqu’à 15 120 1 260 1 512 1 008 378 504 756\n16\
\ 632 1 293 1 551 1 034 388 517 776\n18 144 1 326 1 591 1 060 398 530 \
\ 795\n19 656 1 358 1 630 1 087 407 543 815\n21 168 1 391 1 669 1 113 417\
\ 556 835\n22 680 1 424 1 709 1 139 427 570 854\n24 192 1 457 1 748 1\
\ 165 437 583 874\n25 704 1 489 1 787 1 191 447 596 894\n27 216 1 522\
\ 1 826 1 218 457 609 913\n28 728 1 555 1 866 1 244 466 622 933\n30 240\
\ 1 588 1 905 1 270 476 635 953\n31 752 1 620 1 944 1 296 486 648 972\n\
33 264 1 653 1 984 1 322 496 661 992\n34 776 1 686 2 023 1 349 506 674 1\
\ 011\n36 288 1 719 2 062 1 375 516 687 1 031\n37 800 1 751 2 102 1 401 525\
\ 701 1 051\n39 312 1 784 2 141 1 427 535 714 1 070\n40 824 1 817 2 180 1\
\ 454 545 727 1 090\n42 336 1 850 2 220 1 480 555 740 1 110\n43 848 1 882\
\ 2 259 1 506 565 753 1 129\n45 360 1 915 2 298 1 532 575 766 1 149\n46 872\
\ 1 935 2 322 1 548 581 774 1 161\n48 384 1 956 2 347 1 564 587 782 1 173\n\
49 896 1 976 2 371 1 580 593 790 1 185\n51 408 1 996 2 395 1 597 599 798 1\
\ 197\n52 920 2 016 2 419 1 613 605 806 1 210\n54 432 2 036 2 443 1 629 611\
\ 814 1 222\n55 944 2 056 2 468 1 645 617 823 1 234\n57 456 2 076 2 492 1\
\ 661 623 831 1 246\n58 968 2 097 2 516 1 677 629 839 1 258\n60 480 2 117\
\ 2 520 1 693 635 847 1 270\n61 992 2 137 2 520 1 710 641 855 1 282\n63 504\
\ 2 157 2 520 1 726 647 863 1 294\n65 016 2 177 2 520 1 742 653 871 1 306\n\
66 528 2 197 2 520 1 758 659 879 1 318\n68 040 2 218 2 520 1 774 665 887 1\
\ 331\n69 552 2 238 2 520 1 790 671 895 1 343\n71 064 2 258 2 520 1 806 677\
\ 903 1 355\n72 576 2 278 2 520 1 822 683 911 1 367\n74 088 2 298 2 520 1\
\ 839 689 919 1 379\n75 600 2 318 2 520 1 855 696 927 1 391\n77 112 2 339\
\ 2 520 1 871 702 935 1 403\n78 624 2 359 2 520 1 887 708 943 1 415\n80 136\
\ 2 379 2 520 1 903 714 952 1 427\n81 648 2 399 2 520 1 919 720 960 1 439\n
83 160 2 419 2 520 1 935 726 968 1 452\n84 672 2 439 2 520 1 951 732 976 1\
\ 464\n86 184 2 460 2 520 1 968 738 984 1 476\n87 696 2 480 2 520 1 984 744\
\ 992 1 488\n89 208 2 500 2 520 2 000 750 1 000 1 500\n 90 720 et\
\ plus 2 520 2 520 2 016 756 1 008 1 512\n* Montants également applicables aux\
\ rentes d’orphelins doubles et aux rentes entières doubles pour enfants.\f15Facteurs\
\ forfaitaires de revalorisation en fonction de l’entrée \ndans l’assurance :\
\ survenance du cas d’assurance en 2025\nPremière in- \nscription au CI*Facteur\
\ de \nrevalorisationPremière in- \nscription au CI*Facteur de \nrevalorisation\n\
1976 1,110 2001 1,000\n1977 1,098 2002 1,000\n1978 1,086 2003 1,000\n1979 1,075\
\ 2004 1,000\n1980 1,063 2005 1,000\n1981 1,052 2006 1,000\n1982 1,042 2007 1,000\n
1983 1,032 2008 1,000\n1984 1,022 2009 1,000\n1985 1,013 2010 1,000\n1986 1,004\
\ 2011 1,000\n1987 1,000 2012 1,000\n1988 1,000 2013 1,000\n1989 1,000 2014 1,000\n\
1990 1,000 2015 1,000\n1991 1,000 2016 1,000\n1992 1,000 2017 1,000\n1993 1,000\
\ 2018 1,000\n1994 1,000 2019 1,000\n1995 1,000 2020 1,000\n1996 1,000 2021 1,000\n\
1997 1,000 2022 1,000\n1998 1,000 2023 1,000\n1999 1,000 2024 1,000\n2000 1,000\n\
* La première inscription au CI déterminante pour le calcul de la rente ne peut\
\ pas \nêtre antérieure à l’année civile au cours de laquelle la personne a atteint\
\ l’âge de \n21 ans.\f16Renseignements et autres \ninformations\nCe mémento ne\
\ fournit qu’un aperçu général. Pour le règlement des \ncas individuels, seules\
\ les dispositions légales font foi. Les caisses de \ncompensation, leurs agences\
\ et les offices AI fournissent volontiers \ntous les renseignements utiles. Vous\
\ trouverez la liste complète des \ncaisses de compensation sur le site www.avs-ai.ch\
\ .\nLes désignations d’état civil utilisées ici ont également les significations\
\ \nsuivantes : \n• mariage : partenariat enregistré ;\n• divorce : dissolution\
\ judiciaire du partenariat enregistré ;\n• décès du conjoint : décès du partenaire\
\ enregistré.\nPublié par le Centre d’information AVS/AI en collaboration avec\
\ \nl’Office fédéral des assurances sociales.\nEdition novembre 2024. Toute reproduction,\
\ même partielle, n’est \nautorisée qu’avec l’accord écrit du Centre d’information\
\ AVS/AI. \nCe mémento peut être obtenu auprès des caisses de compensation et\
\ \nde leurs agences ainsi qu’auprès des offices AI. Numéro de commande \n3.03/f.\
\ Il est également disponible sous www.avs-ai.ch .\n Plus d’informations, de publications\
\ et de vidéos explicatives.\n3.03-25/01-F"
- " enfant gravement atteint dans sa santé ou \nd’adoption (APG)\n• Prestations\
\ complémentaires (PC)\n• Prestations transitoires pour chômeurs âgés (Ptra).\n\
2 Suis-je assuré auprès de l’AVS/AI ?\nL’AVS et l’AI sont des assurances générales\
\ et obligatoires. Y sont assurées \nles personnes qui sont domiciliées ou qui\
\ exercent une activité lucrative en \nSuisse. L’obligation légale de s’assurer\
\ vaut également pour les ressortis -\nsants étrangers. \n3 Comment puis-je connaître\
\ mon numéro AVS ?\nLe numéro AVS figure sur la carte d’assurance-maladie ainsi\
\ que sur le \ncertificat d’assurance AVS/AI. Un assuré qui ne possède pas de\
\ carte \nd’assurance-maladie ni de certificat d’assurance AVS/AI peut s’adresser\
\ \nà sa caisse de compensation afin qu’elle lui délivre ledit certificat d’as\
\ -\nsurance. Il est alors tenu de le conserver. Les assurés présentent leur carte\
\ \nd’assurance-maladie ou leur certificat d’assurance AVS/AI à leur nouvel \
\ \nemployeur lors de chaque changement d’emploi et à l’organe compétent \nen\
\ cas de demande de prestations. Important : le numéro AVS doit être \nindiqué\
\ dans toute correspondance avec les caisses de compensation.\f21Cotisations\n\
4 A quel moment débute l’obligation de cotiser \nauprès de l’AVS/AI ?\nLes personnes\
\ non actives paient des cotisations dès le 1er janvier qui suit \nleur 20e anniversaire\
\ et jusqu’à l’âge de référence. Les personnes actives \nont l’obligation de cotiser\
\ dès le début de leur activité lucrative, mais pas \navant le 1er janvier qui\
\ suit leur 17e anniversaire.\n5 De quelle manière les cotisations à l’AVS, à\
\ l’AI \net au régime des APG sont-elles perçues ?\nLes cotisations dues à l’AVS,\
\ à l’AI et aux APG sont perçues de la manière \nsuivante :\n• Salariés :\n La\
\ cotisation paritaire, définie en pourcentage du salaire, est payée \npour moitié\
\ par le salarié (prélevée sur le salaire) et pour moitié par \nl’employeur. S’y\
\ ajoutent la cotisation due à l’assurance-chômage (AC)\net, le cas échéant, les\
\ cotisations relatives à d’autres branches d’as -\nsurances sociales. Les cotisations\
\ du salarié sont retenues lors de cha -\nque paie par l’employeur, qui les verse\
\ à la caisse de compensation en \nmême temps que sa propre part de cotisations\
\ (voir aussi mémentos \n2.01 - Cotisations salariales à l’AVS, à l’AI et aux\
\ APG et 2.08 - Cotisa -\ntions à l’assurance-chômage ).\n• Indépendants :\n\
\ La personne qui exerce une activité indépendante établit le décompte \nde ses\
\ cotisations directement avec la caisse de compensation. Ses \ncotisations sont\
\ définies en pourcentage du revenu mentionné sur la \ntaxation de l’impôt fédéral\
\ direct et communiqué à la caisse de com -\npensation par l’autorité fiscale.\
\ S’y ajoutent, le cas échéant, les cotisa -\ntions relatives à d’autres branches\
\ d’assurances sociales. Il appartient à \nla caisse de compensation de déterminer\
\ si l’assuré est indépendant au \nsens de l’AVS (voir aussi mémento 2.02 - Cotisations\
\ des indépendants \nà l’AVS, à l’AI et aux APG ).\n• Non-actifs :\n Le montant\
\ des cotisations d’une personne sans activité lucrative \ndépend de la fortune\
\ et du revenu acquis sous forme de rente. Il est \nfixé par la caisse de compensation\
\ du canton de domicile de l’assuré. \nSi vous avez dépassé l’âge de 58 ans, le\
\ montant des cotisations sera \ndéterminé par la caisse de compensation auprès\
\ de laquelle vous \f22 avez, en dernier lieu, payé des cotisations comme personne\
\ active (voir \naussi mémento 2.03 - Cotisations des personnes sans activité\
\ lucrative \nà l’AVS, à l’AI et aux APG ).\nPrestations de l’AVS\n6 Quelles prestations\
\ l’AVS octroie-t-elle ? \nL’AVS octroie les prestations suivantes :\n• rente\
\ de vieillesse pour les hommes et les femmes qui ont atteint l’âge \nde référence.\
\ L’âge de référence est fixé à 65 ans pour les hommes \net est relevé progressivement\
\ de 64 à 65 ans pour les femmes dès \n2025. A partir de 2028, l’âge de référence\
\ sera ainsi le même pour \ntout le monde, soit 65 ans. (voir mémento 3.01 - Rentes\
\ de vieil -\nlesse et allocations pour impotents de l’AVS ). La rente de vieillesse\
\ \npeut être anticipée ou ajournée. Il y a réduction de la rente en cas \nd’anticipation\
\ et augmentation en cas d’ajournement (voir mémento \n3.04 - Flexibilisation\
\ de la retraite ).\n• rente pour enfant versée aux bénéficiaires de rente de\
\ vieillesse. Ce \ndroit vaut jusqu’au 18e anniversaire de leurs enfants ou jusqu’à\
\ la fin \nd’un apprentissage ou des études mais, au maximum, jusqu’à leur \n\
25e anniversaire.\n• rente de veuve ou de veuf.\n• rente d’orphelin pour les\
\ enfants de moins de 18 ans ou jusqu’à la fin \nd’un apprentissage ou des études\
\ mais, au maximum, jusqu’à leur 25e \nanniversaire.\n• allocation pour impotent\
\ selon le chiffre 8.\n• moyens auxiliaires selon le chiffre 9.\n7 Quelles conditions\
\ ouvrent le droit \naux prestations de l’AVS ? \nLes ressortissants d’un pays\
\ avec lequel la Suisse n’a pas conclu de conven -\ntion de sécurité sociale (Etats\
\ non contractants), ainsi que leurs survivants \n(veuves, veufs, orphelins),\
\ ont droit à une rente de l’AVS\n• s’ils sont domiciliés en Suisse, et\n• s’ils\
\ ont cotisé à l’AVS pendant une année entière au moins, ou\n• s’ils ont été\
\ couverts pendant une année entière au moins par \nl’assurance d’un conjoint\
\ ayant travaillé et versé au moins le double de \nla cotisation minimale, ou\f\
23• s’ils justifient d’au moins une année entière de bonifications pour \ntâches\
\ éducatives ou pour tâches d’assistance.\nUne rente de survivants n’est octroyée\
\ que si la personne décédée a été \nassurée pendant une année entière au moins.\n\
8 Quelles conditions ouvrent le droit \nà une allocation pour impotent ?\nLes\
\ bénéficiaires d’une rente de vieillesse ou de prestations complémen -\ntaires,\
\ domiciliés et résidant habituellement en Suisse, ont droit à une \nallocation\
\ pour impotent lorsqu’ils présentent une impotence faible, mo -\nyenne ou grave\
\ depuis six mois au moins. Est considérée comme impo -\ntente la personne qui\
\ a besoin d’une aide régulière d’autrui pour les actes \nordinaires de la vie\
\ (s’habiller, manger, faire sa toilette, etc.) et de soins \nen permanence, voire\
\ d’une surveillance personnelle. Les allocations pour \nimpotent ne sont pas\
\ versées à l’étranger.\n9 Quelles conditions ouvrent le droit \nà des moyens\
\ auxiliaires de l’AVS ?\nLes bénéficiaires de rentes de vieillesse domiciliés\
\ en Suisse reçoivent, à \ncertaines conditions, des moyens auxiliaires de l’AVS\
\ (appareils acous -\ntiques, lunettes-loupes, prothèses, fauteuils roulants,\
\ etc. voir mémento \n3.02 - Moyens auxiliaires de l’AVS ).\nPrestations de l‘AI\n\
10 Quelles prestations l’AI octroie-t-elle ?\nL’AI accorde d’abord des mesures\
\ de réadaptation. La rente AI n’est versée \nque si les mesures de réadaptation\
\ n’ont pas atteint leur but ou ne l’ont \natteint qu’en partie, ou encore si\
\ elles sont d’emblée vouées à l’échec.\nL’AI fournit les prestations suivantes\
\ :\n• mesures d’intervention précoce :\n celles-ci ont pour but de maintenir\
\ à leur poste les assurés en inca -\npacité de travail, de permettre leur réadaptation\
\ à un nouveau poste \nou faciliter l’accès aux jeunes assurés à une formation\
\ professionnelle \ninitiale.\n• mesures de réadaptation :\n celles-ci sont destinées\
\ à améliorer la capacité de gain actuelle ou \nfuture (par ex. reclassement\
\ professionnel, moyens auxiliaires).\f24• rente d’invalidité :\n celle-ci vise\
\ à compenser les conséquences économiques durables de \nl’invalidité en couvrant\
\ les besoins vitaux dans une mesure appropriée. \nElle est accordée au plus tôt\
\ dès l’âge de 18 ans. Le taux d’invalidité \ndétermine la rente que touchera\
\ la personne assurée.\n• rente pour enfant :\n celle-ci est versée en cas d’invalidité\
\ d’un des parents ; l’enfant y donne \ndroit jusqu’à son 18e anniversaire ou\
\ jusqu’à la fin d’un apprentissage \nou des études mais, au maximum, jusqu’à\
\ son 25e anniversaire.\n• allocation pour impotent et contribution d’assistance\
\ selon le chiffre 14.\n11 Quelles conditions ouvrent le droit \nà des mesures\
\ de réadaptation ?\nLes ressortissants d’Etats non contractants domiciliés en\
\ Suisse ont droit \naux mesures de réadaptation si, avant la survenance de l’invalidité,\n\
• ils ont cotisé pendant une année entière au moins, ou\n• ils ont vécu au moins\
\ une année entière en Suisse avec un conjoint \nayant travaillé et versé au moins\
\ le double de la cotisation minimale, ou\n• ils justifient d’un an de bonifications\
\ pour tâches éducatives ou pour \ntâches d’assistance, ou\n• ils ont séjourné\
\ en Suisse pendant une durée ininterrompue de dix ans.\n12 Les enfants ont-ils\
\ également droit \nà des mesures de réadaptation ?\nLes enfants de moins de\
\ 20 ans ont également droit à des mesures de réad -\naptation lorsqu’un seul\
\ des parents remplit les conditions ci-dessus, ou s’ils \nsont eux-mêmes nés\
\ invalides en Suisse ou y ont vécu au moins un an avant \nla survenance de l’invalidité\
\ ou sans interruption depuis leur naissance. Les \nenfants nés à l’étranger ont\
\ exceptionnellement droit à des mesures de \nréadaptation si leur mère a résidé\
\ à l’étranger pendant deux mois au plus \nimmédiatement avant la naissance.\n\
13 Quelles conditions ouvrent le droit à une rente de l’AI ?\nPour avoir droit\
\ à une rente AI ordinaire, la personne assurée doit avoir \ncotisé à l’assurance\
\ suisse pendant trois années entières au moins avant la \nsurvenance de l’invalidité\
\ et être domiciliée en Suisse.\f2514 Quelles conditions ouvrent le droit à une\
\ allocation \npour impotent ou à une contribution d‘assistance ?\nLes assurés\
\ qui remplissent les conditions décrites au chiffre 11 ont droit à \nl’allocation\
\ pour impotent s’ils résident en Suisse et si, en raison de leur in -\nvalidité,\
\ ils ont besoin d’une aide régulière d’autrui pour les actes ordinaires \nde\
\ la vie (s’habiller, manger, faire sa toilette, etc.), voire d’une surveillance\
\ \npersonnelle. De plus, les bénéficiaires d’une allocation pour impotent vi\
\ -\nvant à la maison peuvent, grâce à la contribution d’assistance, engager un\
\ \nassistant qui leur apporte l’aide dont ils ont besoin. Si l’impotence subsiste\
\ \nlorsque la rente de vieillesse se substitue à la rente d’invalidité, la personne\
\ \nassurée continuera de toucher une allocation pour impotent et une contri -\n\
bution d’assistance au moins égales à celles reçues jusque-là. \nL’allocation\
\ pour impotent et la contribution d’assistance ne sont pas ver -\nsées à l’étranger.\
\ \nPrestations complémentaires\n15 Ma rente AVS ou AI ne couvre pas le coût de\
\ la vie, que \npuis-je faire ?\nSi vous percevez une prestation en espèces (rente\
\ AVS ou AI, une allocation \npour impotent de l‘AI après l‘âge de 18 ans ou une\
\ indemnité journalière \nde l‘AI pendant au moins six mois) et que vous vivez\
\ dans des conditions \néconomiques modestes, vous avez droit, sous certaines\
\ conditions, à des \nprestations complémentaires. \nSi vous avez atteint l‘âge\
\ de référence, que vous êtes invalide, veuf, veuve \nou orphelin et que vous\
\ n‘avez néanmoins pas droit à une rente parce que \nvous n‘avez pas cotisé ou\
\ que vous n‘avez pas cotisé assez longtemps, vous \npouvez tout de même faire\
\ valoir un droit aux prestations complémentaires \nsous certaines conditions.\
\ \nSi vous n‘habitez pas en Suisse, vous n‘avez pas droit aux prestations com\
\ -\nplémentaires.\nVous trouverez de plus amples informations à ce sujet dans\
\ le mémento \n5.01 - Prestations complémentaires à l‘AVS et à l‘AI et dans le\
\ mémento \n5.02 - Votre droit aux prestations complémentaires à l‘AVS et à l‘AI\
\ .\f26Calcul des rentes\n16 Comment les rentes de l’AVS et de l’AI se calculent-elles\
\ ?\nLes rentes AVS et AI sont calculées sur la base de la durée des cotisations,\
\ du \nrevenu de l’activité lucrative ainsi que des bonifications pour tâches\
\ éduca -\ntives ou d’assistance. Les cotisations versées à une assurance étrangère\
\ et \nles périodes de cotisations correspondantes ne peuvent être ni transférées\
\ \nà l’AVS/AI suisse ni prises en considération d’aucune autre façon.\nTransfert\
\ des cotisations\n17 Mes cotisations AVS peuvent-elles être transférées \nà\
\ l’assurance sociale de mon pays d’origine ?\nLes cotisations versées à l’AVS/AI\
\ suisse ne peuvent pas être transférées à \nl’assurance du pays d’origine du\
\ ressortissant étranger.\nRemboursement des cotisations\n18 Sous quelles conditions\
\ les cotisations AVS \npeuvent-elles être remboursées ?\nLes ressortissants\
\ d’Etats non contractants domiciliés à l’étranger peuvent, \nà certaines conditions,\
\ demander et obtenir le remboursement sans in -\ntérêts des cotisations AVS après\
\ avoir quitté définitivement la Suisse. Ils \ndoivent pour cela, notamment, avoir\
\ cotisé à l’assurance suisse pendant \nune année entière au moins.\nDemandes\
\ de prestations\n19 A qui les demandes de prestations doivent-elles \nêtre adressées\
\ ?\nLa demande de prestations doit être adressée aux organismes suivants, qui\
\ \ndisposent également des formulaires nécessaires.\nSi l’assuré est domicilié\
\ en Suisse :\n• pour les prestations de l’AVS à la caisse de compensation à\
\ laquelle les \ncotisations ont été versées en dernier lieu\n• pour les prestations\
\ de l’AI à l’office AI du canton de domicile\nIl est recommandé à l’assuré de\
\ présenter sa demande de prestations dès \nque les conditions de droit sont remplies.\
\ \f27Si l’assuré est domicilié à l’étranger : \npour le remboursement des cotisations\
\ selon le chiffre 18, à la \nCaisse suisse de compensation \nAvenue Edmond-Vaucher\
\ 18 \nCase postale 3100 \nCH-1211 Genève 2\n20 A qui les demandes de prestations\
\ d’une assurance \nd’un pays étranger doivent-elles être adressées ? \nLes ressortissants\
\ d’Etats non contractants domiciliés en Suisse qui enten -\ndent faire valoir\
\ le droit aux prestations d’une assurance étrangère s’adres -\nseront directement\
\ à l’institution d’assurance compétente ou à une repré -\nsentation du pays concerné\
\ en Suisse.\n21 Où puis-je obtenir des renseignements et des \ninformations\
\ supplémentaires ?\nLes caisses de compensation, leurs agences et les offices\
\ AI fournissent \ntout renseignement utile. La liste complète des caisses de\
\ compensation \nfigure sur Internet à l’adresse www.avs-ai.ch . \nLes ressortissants\
\ d’Etats non contractants domiciliés à l’étranger s’adres -\nseront à la \nCaisse\
\ suisse de compensation \nAvenue Edmond-Vaucher 18 \nCase postale 3100 \n\
CH-1211 Genève 2\nD’autres informations figurent dans la brochure La sécurité\
\ sociale en \nSuisse. Cette brochure est disponible à l’adresse Internet www.avs-ai.ch\
\ .\nPrévoyance professionnelle (2e pilier) \n22 Prévoyance professionnelle (2e\
\ pilier) : \nquels droits, quelles obligations ?\nLa prévoyance professionnelle\
\ (caisse de pensions) vient en complément de \nl’AVS/AI/APG pour permettre aux\
\ assurés ou aux survivants de maintenir \nde manière appropriée leur niveau de\
\ vie antérieur lorsque survient un cas \nde prévoyance (vieillesse, décès ou\
\ invalidité). Elle est obligatoire pour les \npersonnes actives dont le revenu\
\ annuel est supérieur à 22 680 francs.\f28Les personnes ayant été assurées dans\
\ la prévoyance professionnelle ont \nles droits suivants :\n• une rente de vieillesse\
\ lorsqu’elles atteignent l’âge de référence ou plus \ntôt suivant le règle ment\
\ de la caisse de pension;\n• une rente d’invalidité, si la personne est invalide\
\ à 40 % au moins et si \nelle était assurée au moment où la cause de l’invalidité\
\ s’est produite \n(les règlements des institutions de prévoyance peuvent prévoir\
\ des dis -\npositions plus favorables);\n• des prestations destinées aux survivants\
\ (veuve, veuf et enfants) en cas \nde décès de la personne assurée;\n• une prestation\
\ de sortie (= prestation de libre passage), si aucun des \ntrois événements précités\
\ n’est survenu lorsque l’assuré quitte la caisse \nde pensions.\nLes rentes de\
\ la prévoyance professionnelle sont versées aussi à l’étranger.\n23 Dans quel\
\ cas ai-je droit à une prestation de sortie \nde la caisse de pensions ?\nEn\
\ règle générale, la prestation de sortie doit être transférée sur un compte \n\
ou une police de libre passage au départ de la caisse de pensions (qui a \nlieu,\
\ normalement, à la fin d’un rapport de travail). Les assurés quittant \ndéfinitivement\
\ la Suisse pour un Etat hors UE/AELE peuvent demander le \npaiement en espèces\
\ de la prestation de sortie à la caisse de pensions de \nleur dernier employeur.\n\
Le paiement en espèces de la prestation de sortie qui correspond au mi -\nnimum\
\ LPP n’est pas possible lorsque l’assuré quitte la Suisse et qu’il reste \nassuré\
\ obligatoirement dans un Etat membre de l’UE pour les risques de \nvieillesse,\
\ de décès et d’invalidité. En revanche, la partie surobligatoire de la \nprestation\
\ de sortie peut être payée en espèces, sur demande de l’assuré. \nSi le versement\
\ en espèces de la prestation de sortie n’est pas possible, le \nmontant est versé\
\ sur un compte de libre passage ou une police de libre \npassage bloqués. Pour\
\ les personnes qui partent pour le Liechtenstein, la \nprestation de sortie est\
\ versée à l’institution de prévoyance de l’employeur \nliechtensteinois. Dans\
\ ce cas-là, il n’y a aucune possibilité de paiement en \nespèces.\f2924 A qui\
\ m’adresser en cas de sortie \nde la caisse de pensions ?\nLes personnes assurées\
\ doivent traiter avec la caisse de pensions de \nleur dernier employeur. Elles\
\ doivent conserver toutes les attestations \nd’assurance reçues de la caisse.\
\ Si un assuré quitte la Suisse sans indiquer \noù la caisse de pension peut verser\
\ la prestation de sortie et sans avoir reçu \nune prestation en espèces, la caisse\
\ de pension est tenue de transférer le \nmontant à l’institution supplétive,\
\ au plus tard dans les deux ans qui suivent \nle départ.\n25 Où obtenir plus\
\ de renseignements concernant \nles prestations de sortie ?\nLes caisses de\
\ pensions fournissent tout renseignement utile sur les pres -\ntations de libre\
\ passage non réclamées. La Centrale du 2e pilier renseigne \négalement les assurés\
\ pour l’exercice de leurs droits éventuels à une pres -\ntation :\nCentrale du\
\ 2e pilier \nFonds de garantie LPP \nCase postale 1023 \n3000 Berne 14 \n\
Tél. 031 380 79 75 \nE-mail: [email protected] \nwww.sfbvg.ch\nAssurance-maladie\n\
26 Assurance-maladie : quels droits, quelles obligations ?\nToute personne domiciliée\
\ en Suisse doit s’assurer auprès d’un assu -\nreur-maladie suisse dans les trois\
\ mois qui suivent sa prise de domicile ou \nsa naissance. Les salariés détachés\
\ provisoirement à l’étranger par leur em -\nployeur, ainsi que les membres de\
\ leur famille qui n’exercent pas d’activité \nlucrative, demeurent soumis à l’assurance\
\ obligatoire suisse. La personne \nassurée paie les primes et participe aux coûts\
\ des prestations dont elle \nbénéficie (franchise et quote-part). L’assurance-maladie\
\ sociale alloue des \nprestations en cas de maladie et de maternité, de même\
\ qu’en cas d’acci -\ndent si aucune assurance-accidents n’en assume la prise\
\ en charge. En cas \nde maladie, les assureurs-maladie remboursent les frais\
\ médicaux, moins la \nparticipation obligatoire aux coûts de la personne assurée.\
\ Les frais occasi -\nonnés par un traitement médical subi, en cas d’urgence,\
\ à l’étranger (hors\f30UE/AELE) sont pris en charge à concurrence du double du\
\ montant qui au -\nrait été remboursé en Suisse. Les assurés qui ne sont pas\
\ ressortissants d’un \nEtat membre de l’UE ou de l’AELE ont droit, en vertu d’une\
\ convention \nconclue entre l’Allemagne et la Suisse, à la prise en charge des\
\ frais de trai -\ntements médicaux subis en cas d’urgence lors d’un séjour en\
\ Allemagne.\nAssurance-accidents\n27 Assurance-accidents : quels droits, quelles\
\ obligations ?\nToute personne salariée en Suisse est obligatoirement assurée\
\ contre les \naccidents professionnels et non professionnels (en cas d’activité\
\ d’au moins \nhuit heures par semaine chez le même employeur), et contre les\
\ maladies \nprofessionnelles. Les primes de l’assurance contre les accidents\
\ et maladies \nprofessionnels sont à la charge de l’employeur ; celles de l’assurance\
\ contre \nles accidents non professionnels sont à la charge du travailleur et\
\ sont dé -\nduites de son salaire. Les conventions dérogatoires en faveur du\
\ travailleur \nsont réservées. L’assurance alloue entre autres le traitement\
\ médical, des \nindemnités journalières et des rentes. Les accidents non professionnels\
\ sur -\nvenus à l’étranger sont aussi couverts. Des renseignements peuvent être\
\ \nobtenus auprès de l’assureur-accidents de l’employeur.\nAssurance-chômage\n\
28 Assurance-chômage : quels droits, quelles obligations ?\nToute personne qui\
\ exerce une activité salariée en Suisse est obligatoire -\nment affiliée à l’assurance-chômage.\
\ La cotisation est payée moitié par \nle salarié (prélevée sur le salaire) et\
\ moitié par l’employeur. Les assurés au \nchômage peuvent percevoir des prestations\
\ s’ils résident en Suisse, s’ils \ns’annoncent comme chômeurs auprès de l’Office\
\ régional de placement et \ns’ils remplissent toutes les conditions d’octroi\
\ posées par la loi sur l’assuran -\nce-chômage. Des renseignements complémentaires\
\ peuvent être obtenus \nsur le site www.travail.swiss ou auprès du service de\
\ l’emploi du canton de \ndomicile. \nDes «info-brochures» sur l’assurance-chômage\
\ et des informations à \nl’attention des demandeurs d’emploi sont disponibles\
\ sur Internet à \nl’adresse www.travail.swiss .\f31Prestations transitoires\
\ pour chômeurs âgés\n29 Qu‘est-ce que les prestations transitoires pour les \
\ \nchômeurs âgés ?\nLes chômeurs qui arrivent en fin de droit de l ’assurance\
\ chômage après \nleur 60e anniversaire et qui ne trouvent plus de revenus suffisants\
\ peuvent \nbénéficier de prestations transitoires jusqu ’à leur retraite. \n\
30 Dans quelles circonstances ai-je droit à des \nprestations transitoires ?\n\
Vous pouvez toucher des prestations transitoires si \n• vous arrivez en fin de\
\ droit dans l’assurance-chômage au plus tôt pen -\ndant le mois au cours duquel\
\ vous atteignez l’âge de 60 ans ;\n• vous avez été assuré à l’assurance-vieillesse\
\ et survivants (AVS) en \nSuisse pendant au moins 20 ans, dont au moins cinq\
\ ans après l’âge de \n50 ans, et vous avez réalisé un revenu annuel provenant\
\ d’une activité \nlucrative d’un certain montant ;\n• vous disposez d’une fortune\
\ inférieure à 50 000 francs (pour une per -\nsonne seule) ou 100 000 francs (pour\
\ un couple), le bien immobilier \nservant d’habitation à son propriétaire n’étant\
\ pas pris en compte ;\n• vous avez votre domicile et votre résidence habituelle\
\ en Suisse ou dans \nun État membre de l’UE ou de l’AELE ;\n• vous présentez\
\ un excédent de dépenses, c’est-à-dire que vos dépen -\nses reconnues excèdent\
\ vos revenus déterminants (condition écono -\nmique).\nVous ne pouvez pas obtenir\
\ de prestations transitoires si\n• vous avez droit à une rente de l’AVS ou de\
\ l’AI ; \n• vous êtes en fin de droit avant votre 60e anniversaire ; \n• vous\
\ êtes en fin de droit avant le 1er juillet 2021.\nVous trouverez de plus amples\
\ informations à ce sujet dans le mémento \n5.03 - Prestations transitoires pour\
\ chômeurs âgés .\f32Allocations familiales\n31 Ai-je droit aux allocations familiales\
\ ?\nEn général, toute personne exerçant une activité lucrative et assurée en\
\ \nSuisse a droit aux allocations familiales pour ses enfants qui vivent en Suisse\
\ \nsous condition qu’elle perçoive un revenu soumis aux cotisations AVS de \n\
7 560 francs par année, respectivement 630 francs par mois au minimum. \nCelles-ci\
\ comprennent :\n• une allocation pour enfant d’au moins 215 francs par mois\
\ ; elle est \noctroyée dès le mois de la naissance jusqu’à et y compris le mois\
\ au \ncours duquel l’enfant a son 16e anniversaire. Si l’enfant donne droit à\
\ \nl’allocation de formation avant son 16e anniversaire, celle-ci sera versée\
\ \nà la place de l’allocation pour enfant. L’allocation pour enfant est éga -\n\
lement octroyée pour les enfants âgés de 16 à 20 ans qui se trouvent \ndans l’impossibilité\
\ d’exercer une activité lucrative en raison d’une at -\nteinte à la santé ;\n\
• une allocation de formation d’au moins 268 francs par mois ; elle est \nversée\
\ à partir du mois au cours duquel l’enfant commence sa forma -\ntion post obligatoire,\
\ mais au plus tôt pour le mois au cours duquel il a \nson 15e anniversaire. L’enfant\
\ ayant atteint l’âge de 16 ans et se trou -\nvant encore à l’école obligatoire\
\ donne droit à l’allocation de formation \nà partir du mois suivant celui au\
\ cours duquel il fête son 16e anniver -\nsaire. L’allocation de formation est\
\ versée jusqu’à la fin de la formation, \nmais au plus tard jusqu’à la fin du\
\ mois au cours duquel l’enfant atteint \nl’âge de 25 ans.\nLes allocations familiales\
\ pour les enfants domiciliés à l’étranger ne sont \nexportées que si une convention\
\ de sécurité sociale conclue par la Suisse \nle prévoit.\f33Prestations de l‘APG\n\
32 Ai-je droit à l’allocation de maternité ? \nLes femmes considérées comme salariées\
\ ou indépendantes au moment de \nla naissance de l’enfant ont droit à l’allocation\
\ de maternité. Elles doivent \navoir été soumises à l’AVS/AI/APG suisse pendant\
\ les neuf mois qui ont \nprécédé immédiatement la naissance de l’enfant et avoir\
\ exercé une acti -\nvité lucrative pendant au moins cinq mois durant cette période.\n\
L’allocation de maternité est octroyée pendant 14 semaines (98 jours) et \nse\
\ monte à 80 % du revenu moyen réalisé avant l’accouchement, mais au \nplus à\
\ 220 francs par jour.\nVous trouverez de plus amples informations à ce sujet\
\ dans le mémento \n6.02 - Allocation de maternité . \n33 Ai-je droit à l’allocation\
\ à l‘autre parent ? \nLes pères, ou les épouses de la mère, considérées comme\
\ l’autre parent au \nsens de l’art. 255a, al. 1, CC, qui à la naissance de l’enfant\
\ exercent une ac -\ntivité professionnelle en tant que salarié(e) ou en qualité\
\ d’indépendant(e) \nont droit à l’allocation à l’autre parent. \nIls doivent\
\ avoir été soumis à l’assurance obligatoire au sens de la loi sur \nl’AVS pendant\
\ les neuf mois qui ont immédiatement précédé la naissance \nde l’enfant et avoir\
\ exercé une activité lucrative pendant au moins cinq \nmois durant cette période.\n\
La durée du congé de l’autre parent est de deux semaines (14 indemnités \njournalières\
\ au maximum). Ils touchent, à titre d’allocation pour perte de \ngain, le 80\
\ % du revenu moyen soumis à l’AVS qu’ils réalisaient avant la \nnaissance, mais\
\ au plus 220 francs par jour.\nVous trouverez de plus amples informations à ce\
\ sujet dans le mémento \n6.04 - Allocation à l’autre parent. \f3434 Ai-je droit\
\ à l’allocation pour prise en charge ? \nLes parents qui doivent interrompre\
\ leur activité lucrative pour prendre en \ncharge leur enfant mineur gravement\
\ atteint dans sa santé ont droit à un \ncongé de prise en charge.\nLe congé de\
\ prise en charge est de quatorze semaines (maximum 98 in -\ndemnités journalières).\
\ L’allocation pour prise en charge se monte à 80 % \ndu revenu moyen de l’activité\
\ lucrative obtenu immédiatement avant la \nperception des jours de congé, mais\
\ au plus à 220 francs par jour.\nVous trouverez de plus amples informations à\
\ ce sujet dans le mémento \n6.10 - Allocation pour prise en charge .\n35 Ai-je\
\ le droit à l’allocation d’adoption ?\nLes personnes exerçant une activité lucrative,\
\ qui accueillent en vue de son \nadoption un enfant de moins de quatre ans, ont\
\ droit à l’allocation d‘ad -\noption. L’adoption de l‘enfant d’un conjoint ou\
\ d’un partenaire ne donne \ndroit à aucune indemnité.\nVous devez avoir été soumis\
\ à l’assurance obligatoire au sens de la loi sur \nl’AVS pendant les neuf mois\
\ qui ont immédiatement précédé l‘adoption de \nl’enfant, et avoir exercé une\
\ activité lucrative durant au moins cinq mois \npendant cette période. \nLe congé\
\ d‘adoption est de deux semaines (maximum 14 indemnités jour -\nnalières). \n\
L’allocation d‘adoption se monte à 80 % du revenu AVS moyen de l’activité \nréalisé\
\ avant l‘accueil de l‘enfant en vue de son adoption mais au plus à 220 \nfrancs\
\ par jour. \nVous trouverez de plus amples informations à ce sujet dans le mémento\
\ \n6.11 - Allocation d’adoption .\f35Renseignements et autres \ninformations\n\
Ce mémento ne donne qu’un aperçu des dispositions en vigueur. Seu -\nles les dispositions\
\ légales et les conventions internationales font foi \ndans le règlement des\
\ cas individuels. Sur demande, la Caisse suisse de \ncompensation à Genève ainsi\
\ que les représentations suisses à l’étran -\nger (ambassade ou consulat) donnent\
\ de plus amples renseignements \net remettent les formulaires nécessaires.\n\
Les désignations d’état civil ont également les significations suivantes : \n\
• mariage : partenariat enregistré;\n• divorce : dissolution juridique du partenariat\
\ enregistré;\n• décès du conjoint : décès du partenaire enregistré.\nPublié par\
\ le Centre d’information AVS/AI en collaboration avec \nl’Office fédéral des\
\ assurances sociales.\nEdition décembre 2024. Reproduction autorisée, sous condition\
\ d’un \naccord écrit du Centre d’information AVS/AI.\nCe mémento est délivré\
\ par les caisses de compensation, leurs agences \net les offices AI. Numéro de\
\ commande 10.03. Il est également dispo -\nnible sous www.avs-ai.ch .\f36Informazioni\
\ per i cittadini degli Stati \ncon i quali la Svizzera non ha concluso \nuna\
\ convenzione di sicurezza sociale \n(Stati non contraenti)\nIn breve\nI cittadini\
\ degli Stati non contraenti che sono domiciliati in Svizzera pos -\nsono far\
\ valere il loro diritto alle prestazioni del sistema di sicurezza \nsociale\
\ svizzero. La sicurezza sociale in Svizzera è basata sul cosiddetto \nsistema\
\ dei tre pilastri. L’AVS, l’AI e le loro prestazioni complementari (PC) \nformano\
\ il 1° pilastro obbligatorio destinato a coprire i bisogni vitali. La \nprevidenza\
\ professionale (cassa pensioni) forma il 2° pilastro: permette alle \npersone\
\ assicurate e ai loro superstiti di mantenere in maniera appropriata il \nloro\
\ tenore di vita anteriore al momento in cui si verifica il caso di previdenza\
\ \n(vecchiaia, decesso o invalidità). Il 3° pilastro infine, la previdenza privata\
\ \n(risparmio, assicurazioni private), completa i primi due ed è facoltativo.\n\
Il presente opuscolo fornisce una visione generale dell’obbligo assicurativo \n\
e del diritto alle prestazioni per il 1° e 2° pilastro e si rivolge a tutti i\
\ cittadini \ndegli Stati con i quali la Svizzera non ha concluso nessuna convenzione\
\ di \nsicurezza sociale.\nLa Svizzera ha concluso una convenzione di sicurezza\
\ sociale con \ni seguenti Stati: \nStati membri dell’UE Cina (assoggettamento)\
\ Montenegro\nStati membri dell‘AELS Corea del Sud (assoggettamento) Regno Unito\n\
Albania Filippine San Marino\nAustralia Giappone Serbia \nBosnia e Erzegovina\
\ India (assoggettamento) Stati Uniti\nBrasile Israele Tunisia\nCanada/Quebec\
\ Kosovo Turchia\nCile Macedonia del Nord Uruguay\nL’elenco degli Stati con i\
\ quali la Svizzera ha concluso una conven -\nzione di sicurezza sociale è disponibile\
\ sul sito Internet dell’Ufficio \nfederale delle assicurazioni sociali (UFAS):\
\ \nIl presente opuscolo vi concerne se siete cittadine o cittadini di uno Stato\
\ \nche non figura nella lista (eccezione fatta per i/le superstiti delle cittadine\
\ o \ndei cittadini svizzeri o degli Stati contraenti).\f37Il sistema svizzero\
\ di sicurezza sociale\n1 Come è organizzato il sistema svizzero \ndi sicurezza\
\ sociale? \nLa sicurezza sociale in Svizzera comprende i seguenti rami assicurativi:\n\
• assicurazione per la vecchiaia, i superstiti e l’invalidità (AVS/AI)\n• previdenza\
\ professionale per la vecchiaia, i superstiti e l’invalidità (PP)\n• assicurazione\
\ malattie (AMal)\n• assicurazione contro gli infortuni (AINF)\n• assicurazione\
\ contro la disoccupazione (AD)\n• assegni familiari (AF)\n• indennità di perdita\
\ di guadagno per chi presta servizio militare, civile \no di protezione civile,\
\ di maternità, per l ’altro genitore, per i genitori \nche assistono un figlio\
\ con gravi problemi di salute dovuti a malattia o \ninfortunio o di adozione\
\ (IPG)\n• prestazioni complementari (PC)\n• prestazioni transitorie per i disoccupati\
\ anziani (PTD).\n2 Sono assicurato per l’AVS/AI?\nL’AVS e l’AI sono assicurazioni\
\ generali obbligatorie per tutti coloro che \nrisiedono o che esercitano un’attività\
\ lucrativa in Svizzera. All’obbligo legale \nassicurativo sottostanno per principio\
\ anche i cittadini stranieri. \n3 Come posso conoscere il mio numero AVS?\nIl\
\ numero AVS figura sulla tessera d’assicurazione malattie come pure sul \ncertificato\
\ d’assicurazione AVS/AI. Una persona assicurata, che non pos -\nsiede la tessera\
\ d’assicurazione malattie e neppure il certificato d’assicu -\nrazione AVS/AI\
\ può rivolgersi alla sua cassa di compensazione al fine di \nottenere il certificato\
\ d’assicurazione. Il certificato d’assicurazione deve \nessere conservato. La\
\ tessera d’assicurazione malattie o il certificato d’assi -\ncurazione AVS/AI\
\ deve essere presentato ad ogni nuovo datore di lavoro e, \nall’atto di richiedere\
\ prestazioni, all’ufficio competente.\nImportante: nella corrispondenza con le\
\ casse di compensazione il numero \nAVS deve sempre essere menzionato.\f38Contributi\n\
4 Quando inizia l’obbligo di contribuzione per l’AVS/AI?\nLe persone senza attività\
\ lucrativa assicurate all’AVS e all’AI pagano i cont -\nributi a partire dal\
\ 1° gennaio che segue il compimento dei 20 anni fino al \nraggiungimento dell’età\
\ di riferimento. Per coloro che esercitano un’attività \nlucrativa l’obbligo\
\ di versare contributi comincia con l’inizio dell’attività, ma \nnon prima del\
\ 1° gennaio che segue il compimento dei 17 anni.\n5 In che maniera vengono riscossi\
\ i contributi all’"
- source_sentence: Où dois-je déposer ma demande de moyens auxiliaires ?
sentences:
- "2.11 Contributi\nObbligo contributivo\nsulle indennità per lavoro\nridotto o\
\ per intemperie\nStato al 1° gennaio 2025\f2In breve\nL’assicurazione contro\
\ la disoccupazione versa alle aziende un’indennità \nper lavoro ridotto o interruzioni\
\ di lavoro, se quest’ultimo non può essere \nsvolto a causa di intemperie. \n\
Il datore di lavoro versa il salario ai lavoratori il giorno usuale di paga. La\
\ \nperdita di guadagno va pagata all’80 per cento. Il datore di lavoro regola\
\ \npoi le assenze dal lavoro con la cassa di disoccupazione.\nLa cassa di disoccupazione\
\ non copre la perdita di salario complessiva, ma \nsoltanto l’80 per cento di\
\ essa.\nL’indennità per lavoro ridotto o per intemperie può essere cal -\ncolata\
\ in modo rapido e semplice grazie al calcolatore online: \nwww.ahv-iv.ch/r/lavororidotto\
\ .\nQuesto opuscolo informativo è destinato ai datori di lavoro e ai salariati.\n\
Contributi alle assicurazioni sociali per i datori di \nlavoro\n1 Quali contributi\
\ sociali vanno pagati? \nIn caso di diritto a indennità per lavoro ridotto o\
\ per intemperie, il datore di \nlavoro è tenuto a pagare i contributi alle assicurazioni\
\ sociali sulla base del \ntempo di lavoro normale, quindi del 100 per cento del\
\ salario. In questo \nmodo, i lavoratori mantengono interamente la propria copertura\
\ assicura -\ntiva.\nEssi devono versare gli importi seguenti:\n• contributi\
\ all’AVS, all’AI, alle indennità di perdita di guadagno (IPG) e \nall’assicurazione\
\ contro la disoccupazione (AD);\n• contributi alla cassa di compensazione per\
\ assegni familiari;\n• contributi alla previdenza professionale;\n• premi all’assicurazione\
\ contro gli infortuni.\nViene dedotta al salariato la sua parte dei contributi\
\ e dei premi, a condi -\nzione che esista un obbligo di contribuzione paritetico\
\ (datore di lavoro e \nsalariato pagano ciascuno la metà) e salvo accordi diversi.\n\
La cassa di disoccupazione rimborsa al datore di lavoro i contributi padronali\
\ \nall’AVS/AI/IPG/AD per i periodi computabili di perdita di lavoro unitamente\
\ \nal versamento delle indennità.\f32 Il lavoro ridotto si applica anche ai lavoratori\
\ a \ndomicilio?\nSì. L’assicurazione contro la disoccupazione versa un’indennità\
\ per lavoro \nridotto anche ai lavoratori a domicilio. Quale base di calcolo\
\ è applica -\nbile il guadagno medio mensile determinato dalla cassa di disoccupazione\
\ \n(secondo il modulo 1044Xi della SECO). Questo guadagno medio è utiliz -\n\
zato anche per il calcolo dei contributi delle assicurazioni sociali per i mesi\
\ \nin cui esiste un diritto a un’indennità per lavoro ridotto.\nEsempio di calcolo\n\
3 Lavoro ridotto in una fabbrica\nSalario in caso di tempo di lavoro normale\n\
Salario mensile, secondo contratto CHF 4 500.00\n6,4 % di deduzione per AVS, AI,\
\ IPG e AD -CHF 288.00\n2 % di deduzione per l’assicurazione contro \ngli infortuni\
\ non professionali-CHF 90.00\nDeduzione per la cassa pensioni -CHF 169.85\nSalario\
\ netto versato CHF 3 952.15\nTempo di lavoro in caso di lavoro ridotto o di sospensione\
\ del lavoro:\nIn questo caso, l’orario di lavoro mensile regolare nell’azienda\
\ ammonta \ncomplessivamente a 187 ore, con 22 giorni lavorativi e 42,5 ore settima\
\ -\nnali. A causa dell’orario ridotto, secondo il controllo dell’orario aziendale\
\ in \nquesto mese sono state lavorate solo 51 ore, il che significa che sono\
\ state \nperse 136 ore.\nLa perdita di guadagno dovuta all’orario ridotto viene\
\ calcolata sulla base \ndelle ore di lavoro perse. Poiché la cassa di disoccupazione\
\ copre l’80 per \ncento della perdita di salario, si determina la perdita di\
\ salario per le 136 ore \nperse e si calcola l’indennità per lavoro ridotto su\
\ questa base.\nTempo di lavoro medio mensile:\n52 settimane x 42,5 ore ÷ 12 mesi\
\ = 184,17 ore\nSalario di base per un’ora:\n4 500 franchi salario di base mensile\
\ ÷ 184,17 ore mensili = 24.43 franchi\nIndennità per lavoro ridotto (80 % del\
\ salario di base andato perso):\n136 ore di lavoro andate perse x 24.43 franchi\
\ l’ora x 0,8 = 2 658 franchi\f4Chiarimenti e altre \ninformazioniSalario in\
\ caso di lavoro ridotto\nSalario mensile, secondo contratto CHF 4 500.00\nRiduzione\
\ del salario lordo: 136 ore x CHF 24.43 -CHF 3 322.50\nSalario lordo per ore\
\ di lavoro prestate CHF 1 177.50\nIndennità per lavoro ridotto 80 % di CHF 3\
\ 322.50 CHF 2 658.00\nSalario lordo CHF 3 835.50\nDeduzioni in base al salario\
\ contrattuale di CHF 4 500\n6,4 % per AVS, AI, IPG e AD -CHF 288.00\n2 % per\
\ l’assicurazione contro gli infortuni \nnon professionali-CHF 90.00\nDeduzione\
\ per la cassa pensioni -CHF 169.85\nSalario netto compresa \nl’indennità\
\ per lavoro ridottoCHF 3 287.65\nIl datore di lavoro versa questo importo al\
\ salariato il giorno in cui viene \nnormalmente effettuato il pagamento. \nQuesto\
\ opuscolo informativo presenta solo una panoramica riassun -\ntiva. Per la valutazione\
\ dei singoli casi fanno stato esclusivamente le \ndisposizioni legali in vigore.\
\ Per ulteriori informazioni ci si può rivolgere \nalle casse di compensazione\
\ o alle loro agenzie. L’elenco delle casse di \ncompensazione è pubblicato all’indirizzo\
\ Internet www.avs-ai.ch .\nPer ulteriori informazioni sulle prestazioni dell’assicurazione\
\ contro \nla disoccupazione ci si può rivolgere alle casse di disoccupazione\
\ o \nconsultare il portale dell’assicurazione contro la disoccupazione, \nwww.lavoro.swiss\
\ .\nPubblicato dal Centro d’informazione AVS/AI in collaborazione con \nl’Ufficio\
\ federale delle assicurazioni sociali e l’ufficio di compensazione \ndell’assicurazione\
\ contro la disoccupazione.\nEdizione novembre 2024. La riproduzione, anche solo\
\ parziale, è \nautorizzata soltanto con il consenso scritto del Centro d’informazione\
\ \nAVS/AI.\nQuesto opuscolo informativo può essere richiesto alle casse di compen\
\ -\nsazione, alle loro agenzie e agli uffici AI. Numero di ordinazione 2.11/i.\
\ \nÈ disponibile anche su www.avs-ai.ch .\n2.11-25/01-I"
- "3.02 Prestations de l’AVS\nMoyens auxiliaires de l’AVS\nEtat au 1er janvier 2024\f\
2En bref\nVous avez droit à des moyens auxiliaires de l’AVS si vous avez votre\
\ domicile \nen Suisse, que vous avez atteint l’âge de référence et percevez une\
\ rente \ncomplète ou des prestations complémentaires. \nSi vous avez bénéficié\
\ de moyens auxiliaires de l’AI ou d’une contribution \nfinancière à leur acquisition\
\ avant de percevoir une rente complète de ma -\nnière anticipée ou avant d’atteindre\
\ l’âge de référence, vous conservez le \ndroit à ces prestations tant que les\
\ conditions d’octroi de l’AI sont remplies. \nDépôt de la demande\n1 Où dois-je\
\ déposer ma demande de moyens auxiliaires ?\nVous pouvez déposer votre demande\
\ de moyens auxiliaires, au moyen du \nformulaire 009.001 - Demande Moyens auxiliaires\
\ de l’AVS , auprès de \nl’office AI de votre canton de domicile.\n2 Quels moyens\
\ auxiliaires sont remboursés ?\nL’AVS prend généralement en charge, indépendamment\
\ du revenu et de la \nfortune de la personne assurée, les coûts des moyens auxiliaires\
\ suivants : \nMoyens auxiliaires Pris en charge Fréquence\nPerruques max. CHF\
\ 1 000.00 1 an\nChaussures orthopédiques sur mesure \net chaussures orthopédiques\
\ de série 75 % du prix net 1 an\nEpithèses faciales 75 % du prix net 2 ans\n\
Appareils orthophoniques après \nopération du larynx75 % du prix net 5 ans\n\
Appareils acoustiques monaural \nbinauralCHF \nCHF630.00 \n1 237.505 ans\n\
Lunettes-loupesmonoculaires \nbinoculairesCHF \nCHF590.00 \n900.005 ans\nTélé-loupes\
\ monoculaires \nbinoculairesCHF \nCHF1 334.00 \n2 048.005 ans\nFauteuils roulants\
\ sans moteur CHF 900.00 5 ans\nVous trouverez des informations complémentaires\
\ sur les appareils auditifs \ndans le mémento 3.07 - Appareils auditifs de l’AVS.\f\
3Contribution aux coûts dans le cadre des prestations \ncomplémentaires (PC)\n\
3 A quelle contribution financière puis-je prétendre si je \ntouche des PC ?\n\
Si vous avez atteint l’âge de référence, touchez une rente de vieillesse \ncomplète\
\ ou des prestations complémentaires et que vous avez besoin de \nmoyens auxiliaires,\
\ le service compétent examine si l’AVS peut, dans le cadre \nde ces prestations,\
\ prendre en charge la part des coûts que vous devriez as -\nsumer vous-même.\
\ Dans le cadre des prestations complémentaires, d’autres \nmoyens auxiliaires\
\ ainsi que certains appareils de soins et de traitement \npeuvent être financés\
\ ou remis à titre de prêt.\nRemise ou prise en charge de moyens auxiliaires par\
\ \nPro Senectute\n4 Quand puis-je faire appel à Pro Senectute ?\nVous pouvez\
\ faire appel à Pro Senectute si vous avez atteint l’âge de ré -\nférence ou touchez\
\ une rente de vieillesse complète de manière anticipée, \nmais que vous n’avez\
\ pas droit à des moyens auxiliaires de l’AVS ou pris \nen charge dans le cadre\
\ des prestations complémentaires. Pro Senectute \nest la plus grande organisation\
\ spécialisée et de services de Suisse pour les \npersonnes âgées. Elle octroie\
\ des contributions complémentaires ou remet \ndes moyens ou des appareils auxiliaires\
\ à titre de prêt. Il n’existe cependant \naucun droit légal à ces prestations.\n\
Si vous souhaitez bénéficier de telles prestations, vous pouvez vous adres -\n\
ser au bureau de consultation Pro Senectute le plus proche. Le secrétariat \n\
romand de Pro Senectute vous fournira toutes les informations souhaitées.\nPro\
\ Senectute Suisse \nRue du Simplon 23 / Case postale \n1800 Vevey \nTél. \
\ 021 925 70 10 \[email protected] \nwww.prosenectute.ch\f4Renseignements\
\ et autres \ninformations\nCe mémento ne donne qu’un aperçu des dispositions\
\ en vigueur. Pour \nle règlement des cas individuels, seules les dispositions\
\ légales font foi. \nLes caisses de compensation et leurs agences fournissent\
\ volontiers \ntous les renseignements utiles. Vous trouverez la liste complète\
\ des \ncaisses de compensation sur le site www.avs-ai.ch .\nPublié par le Centre\
\ d’information AVS/AI en collaboration avec \nl’Office fédéral des assurances\
\ sociales.\nRéimpression novembre 2024. Toute reproduction, même partielle, \n\
n’est autorisée qu’avec l’accord écrit du Centre d’information AVS/AI. \nCe mémento\
\ peut être obtenu auprès des caisses de compensation et \nde leurs agences ainsi\
\ qu’auprès des offices AI. Numéro de commande \n3.02/f. Il est également disponible\
\ sous www.avs-ai.ch .\n Plus d’informations, de publications et de vidéos explicatives.\n\
3.02-24/01-F"
- "2.14 Contributi\nLotta contro \nStato al 1° gennaio 2025il fallimento abusivo\f\
2In breve\nIn linea di principio, uno degli obiettivi principali del diritto fallimentare\
\ è \ndi dare agli imprenditori che hanno dichiarato fallimento l’opportunità\
\ di \nriprendere un’attività economica. Tuttavia in passato si è spesso constatato\
\ \nche il diritto fallimentare era anche sfruttato per sottrarsi ai propri obblighi.\
\ \nLa modifica della legge federale dell’11 aprile 1889 sulla esecuzione e sul\
\ \nfallimento (LEF), in vigore dal 1° gennaio 2025, mira ad impedire i fallimenti\
\ \nabusivi.\nFinora le istituzioni di diritto pubblico, tra cui figurano anche\
\ le casse di \ncompensazione, dovevano incassare i contributi attraverso la via\
\ del pigno -\nramento. Nel caso dell’esecuzione in via di pignoramento finora\
\ applicata, \ni debitori avevano un anno di tempo circa per pagare i contributi\
\ dovuti \nprima che venisse rilasciato l’attestato di carenza di beni dopo il\
\ pigno -\nramento. In caso di inadempienza, i titolari delle imprese non subivano\
\ \npraticamente alcuna conseguenza, ma potevano mantenere l’impresa e \ncontinuare\
\ la loro attività, persino quando ai loro creditori erano già stati \nconsegnati\
\ diversi attestati di carenza di beni dopo pignoramento. Com’è \npossibile? Non\
\ appena viene rilasciato un attestato di carenza di beni dopo \npignoramento,\
\ nella maggior parte dei casi il denaro dovuto ai creditori è \nda considerare\
\ perduto poiché difficilmente la situazione finanziaria del \ndebitore migliora.\n\
Nell’ambito dell’esecuzione in via di pignoramento applicata in futuro, i \ncreditori\
\ dovranno saldare i contributi dovuti entro un termine molto più \nstretto. Le\
\ imprese e i lavoratori indipendenti che non sono in grado di \nadempiere ai\
\ loro obblighi finanziari vengono invitati dal tribunale a saldare \nle fatture\
\ aperte nel quadro di una procedura d’esecuzione (generalmente \ntre mesi dopo\
\ la scadenza del termine di pagamento). Se non avviene alcun \npagamento, il\
\ tribunale competente dichiara fallimento e l’impresa viene \nchiusa.\nLe esecuzioni\
\ e le procedure di fallimento comportano per i debitori spese \ne difficoltà\
\ considerevoli che possono essere evitate pagando gli importi \nrichiesti entro\
\ il termine previsto o concludendo un relativo accordo di pa -\ngamento.\nIl\
\ presente opuscolo informa sugli aspetti principali della procedura di falli\
\ -\nmento e sulle loro conseguenze.\f3Crediti contributivi AVS in sospeso\n1\
\ Chi è soggetto alla procedura di fallimento?\nSono soggetti alla procedura di\
\ fallimento le persone giuridiche e i lavo -\nratori indipendenti iscritti nel\
\ registro di commercio. La procedura non \nconcerne invece le persone iscritte\
\ a una cassa di compensazione come \npersone senza attività lucrativa o datori\
\ di lavoro non tenuti a pagare con -\ntributi.\n2 Quali opzioni vi sono dopo\
\ la comminatoria di \nfallimento?\nSe il debitore non fa opposizione contro\
\ il precetto esecutivo notificato \no la stessa è stata rigettata, la cassa di\
\ compensazione può chiedere la \ncontinuazione dell’esecuzione. L’ufficio di\
\ esecuzione gli notifica la com -\nminatoria di fallimento. Dopo la comminatoria\
\ di fallimento, il debitore ha \nancora la possibilità di pagare tutti i contributi\
\ in sospeso, comprese le \nspese di procedura.\nLa cassa di compensazione può\
\ presentare la richiesta di fallimento al giu -\ndice del fallimento al più presto\
\ 20 giorni dopo la notifica della commi -\nnatoria di fallimento e al più tardi\
\ 15 mesi dopo la notifica del precetto \nesecutivo. Il giudice del fallimento\
\ verifica se sono adempiute le condizioni \nper l’apertura della procedura di\
\ fallimento. Fino alla presentazione della \nrichiesta di fallimento da parte\
\ della cassa di compensazione, il debitore \npuò evitare il fallimento pagando\
\ gli arretrati. Spetta alla cassa di compen -\nsazione decidere se il pagamento\
\ può essere effettuato con un versamento \nuna tantum o a rate.\n3 Come evitare\
\ il fallimento?\nPer evitare il fallimento, il debitore deve pagare tutti gli\
\ arretrati prima che \nla cassa di compensazione presenti la richiesta di fallimento\
\ o che il giudice \ndel fallimento dichiari il fallimento.\n4 Quali sono le conseguenze\
\ di un fallimento?\nDopo la dichiarazione di fallimento, il debitore non può\
\ più portare avanti \nla sua impresa nella stessa forma. Il patrimonio pignorabile\
\ diviene massa \ndel fallimento e non può più essere utilizzato. La massa del\
\ fallimento viene \nutilizzata per coprire i crediti aperti. Dopo la liquidazione,\
\ le persone giuri -\ndiche vengono cancellate dal registro di commercio e cessano\
\ di esistere.\f4Chiarimenti e altre \ninformazioniSe il debitore è iscritto\
\ nel registro di commercio quale organo di una per -\nsona giuridica, la cassa\
\ di compensazione valuta se far valere nei suoi con -\nfronti una domanda di\
\ risarcimento e se depositare una denuncia penale.\n5 Dopo un fallimento è possibile\
\ riprendere un’attività?\nSe il debitore è stato condannato da un tribunale penale\
\ a una pena de -\ntentiva superiore a sei mesi, il giudice può interdirgli in\
\ tutto o in parte \nl’esercizio dell’attività in questione o di altre attività\
\ analoghe per un tempo \nda sei mesi a cinque anni (art. 67 CP). Le autorità\
\ di perseguimento penale \nannunciano sistematicamente le interdizioni di attività\
\ alle autorità del re -\ngistro di commercio. Se le società e le persone interessate\
\ non eseguono \nda sole tale interdizione, l’Ufficio del registro di commercio\
\ deve disporre \nd’ufficio misure adeguate, per esempio la cancellazione delle\
\ persone inte -\nressate dal registro di commercio.\nQuesto opuscolo informativo\
\ presenta solo una panoramica riassun -\ntiva. Per la valutazione dei singoli\
\ casi fanno stato esclusivamente le \ndisposizioni legali in vigore. Per ulteriori\
\ informazioni ci si può rivolgere \nalle casse di compensazione o alle loro agenzie.\
\ L’elenco delle casse di \ncompensazione è pubblicato all’indirizzo Internet\
\ www.avs-ai.ch .\nPubblicato dal Centro d’informazione AVS/AI in collaborazione\
\ con \nl’Ufficio federale delle assicurazioni sociali.\nEdizione ottobre 2024.\
\ La riproduzione, anche solo parziale, è autoriz -\nzata soltanto con il consenso\
\ scritto del Centro d’informazione AVS/AI. \nÈ disponibile anche su www.avs-ai.ch\
\ . \n2.14-25/01-I"
- source_sentence: Quali sono i requisiti per ottenere una rendita vedovile per le
donne sposate?
sentences:
- "4.12 Leistungen der IV\nEingliederungsorientierte \nBeratung, Früherfassung \
\ \nStand am 1. Januar 2024und Frühintervention\f2Auf einen Blick\nDie eingliederungsorientierte\
\ Beratung, die Früherfassung und die Frühin -\ntervention sind präventive Mittel\
\ der Invalidenversicherung (IV) und drei \nverschiedene Phasen im IV-Verfahren,\
\ die es zu unterscheiden gilt.\nMit der eingliederungsorientierten Beratung bietet\
\ die IV-Stelle unabhän -\ngig von einem konkreten Fall Beratungsgespräche und\
\ allgemeine Informa -\ntionen zur IV an. \nIm Rahmen der Früherfassung sollen\
\ arbeitsunfähige, von Arbeitsunfähig -\nkeit oder von Invalidität bedrohte Personen\
\ so rasch wie möglich mit Fach -\npersonen der IV in Kontakt treten. Sobald der\
\ Kontakt hergestellt ist, wird \nmöglichst schnell darüber entschieden, ob eine\
\ IV-Anmeldung notwendig \nist. \nSobald eine IV-Anmeldung eingereicht wird, prüft\
\ die zuständige IV-Stelle \ngemeinsam mit der versicherten Person und den involvierten\
\ Partnern, ob \ngeeignete Frühinterventionsmassnahmen den Erhalt des Arbeitsplatzes\
\ \noder eine rasche Reintegration ins Arbeitsleben ermöglichen können.\nDieses\
\ Merkblatt informiert Versicherte, Eingliederungsakteure sowie Mel -\ndeberechtigte\
\ über die eingliederungsorientierte Beratung, die Früherfas -\nsung und die Frühintervention.\n\
\ \f3Eingliederungsorientierte Beratung\n1 Was ist eine eingliederungsorientierte\
\ Beratung?\nDie eingliederungsorientierte Beratung umfasst niederschwellige und\
\ fall- \nunabhängige Beratungsgespräche durch die IV-Stelle. Darunter fallen\
\ bei -\nspielsweise allgemeine Informationen über den Auftrag und die Leistungen\
\ \nder IV, über den Umgang mit Erkrankungen am Arbeitsplatz, die Meldung \nzur\
\ Früherfassung oder die Anmeldung für IV-Leistungen.\n2 An wen richtet sich die\
\ eingliederungsorientierte \nBeratung?\nDie eingliederungsorientierte Beratung\
\ richtet sich an versicherte Perso -\nnen, Arbeitgebende, behandelnde Ärzte sowie\
\ betroffene Akteure des \nSchul- und Bildungswesens auf deren Ersuchen hin.\n\
3 Besteht ein Rechtsanspruch auf eingliederungs- \norientierte Beratung?\nEs\
\ besteht kein Rechtsanspruch auf eingliederungsorientierte Beratung.\f4Früherfassungsphase\
\ \nFrüherfassung\n4 Was ist eine Früherfassung?\nDie Früherfassung zielt darauf\
\ ab, dass die IV-Stelle so früh wie möglich mit \nPersonen in Kontakt tritt,\
\ die aus gesundheitlichen Gründen arbeitsunfähig \noder von Arbeitsunfähigkeit\
\ bedroht sind und bei denen die Gefahr einer \nChronifizierung der gesundheitlichen\
\ Beschwerden besteht. Kommt die \nIV-Stelle zum Schluss, dass ohne geeignete\
\ Massnahmen eine Invalidität \ndroht, fordert sie die betroffene Person auf,\
\ sich bei der IV anzumelden. \nDie Früherfassung ermöglicht der IV ein rasches\
\ Eingreifen und präventives \nVorgehen zugunsten der beruflichen Eingliederung.\
\ \n5 Können Jugendliche zur Früherfassung gemeldet \nwerden?\nJa. Jugendliche\
\ und junge erwachsene Personen zwischen 13 und 25 Jah -\nren, können sich melden\
\ oder gemeldet werden, wenn sie:\n• von Invalidität bedroht sind, \n• noch\
\ keine Erwerbstätigkeit ausgeübt haben und \n• sich in einem kantonalen Brückenangebot\
\ befinden oder von einer \nkantonalen Koordinationsstelle für Jugendliche in\
\ ihrer beruflichen Ein -\ngliederung unterstützt werden.\nAuch Jugendliche, die\
\ bereits erwerbstätig waren und erwachsene Perso -\nnen, die arbeitsunfähig oder\
\ von Arbeitsunfähigkeit bedroht sind, können \nsich melden oder gemeldet werden.\n\
Meldung zur Früherfassung\n6 Wer kann eine Meldung einreichen?\nFolgende Personen\
\ und Instanzen können eine Meldung einreichen:\n• die versicherte Person sowie\
\ ihre gesetzliche Vertretung\n• die mit der versicherten Person im gemeinsamen\
\ Haushalt lebenden \nFamilienangehörigen \n• die Arbeitgebenden\n• die behandelnden\
\ Ärzte und Chiropraktiker \n• der Krankentaggeldversicherer\f5• die privaten\
\ Versicherungsunternehmen, die eine Krankentaggeld- \noder Rentenversicherung\
\ anbieten\n• der Unfallversicherer\n• die Einrichtungen der beruflichen Vorsorge\n\
• die Arbeitslosenversicherung\n• die Sozialhilfeorgane\n• die Militärversicherung\n\
• der Krankenversicherer\n• die kantonalen Instanzen und Durchführungsstellen,\
\ die für die Unter -\nstützung und die Förderung der beruflichen Eingliederung\
\ von Jugend -\nlichen zuständig sind\n7 Wie erfolgt die Meldung?\nDie Meldung\
\ ist schriftlich bei der IV-Stelle des Wohnsitzkantons der ver -\nsicherten Person\
\ einzureichen. Das Formular kann bei den IV-Stellen, den \nAusgleichskassen und\
\ deren Zweigstellen sowie unter www.ahv-iv.ch be-\nzogen werden.\n8 Wird die\
\ versicherte Person vorgängig über die \nMeldung informiert?\nJa. Personen und\
\ Instanzen, die eine versicherte Person zur Früherfassung \nbei der IV-Stelle\
\ melden, müssen diese vorgängig darüber informieren.\n9 Ist die Meldung zur Früherfassung\
\ eine Anmeldung für \nIV-Leistungen?\nNein. Die Meldung zur Früherfassung gilt\
\ nicht als Anmeldung für Leistun -\ngen der IV. In der Früherfassungsphase werden\
\ keine Leistungen der IV \nzugesprochen.\f6Früherfassungsgespräch\n10 Das Meldeformular\
\ wurde der IV-Stelle eingereicht, \nwie geht es jetzt weiter?\nDie IV-Stelle\
\ kann die gemeldete Person zu einem Gespräch einladen. In \ndiesem wird\n• sie\
\ über den Zweck der Früherfassung informiert,\n• eine Analyse ihrer medizinischen,\
\ beruflichen und sozialen Situation \nvorgenommen,\n• sie darüber aufgeklärt,\
\ welche Informationen die IV-Stelle bei wem ein -\nholt,\n• geprüft, ob eine\
\ IV-Anmeldung angezeigt ist.\n11 Wer kann an diesem Gespräch teilnehmen?\nMit\
\ dem Einverständnis der versicherten Person können Dritte am Gespräch \nteilnehmen,\
\ zum Beispiel die Person/Institution, welche den Fall gemeldet \nhat und/oder\
\ Arbeitgebende. Es steht der versicherten Person ebenfalls \noffen, sich von\
\ einer Vertrauensperson begleiten zu lassen. Hält es die IV-\nStelle für angezeigt,\
\ kann auch ein Arzt oder eine Ärztin des regionalen \närztlichen Dienstes (RAD)\
\ hinzugezogen werden.\n12 Wann erfolgt kein Gespräch?\nGeht aus der Meldung bereits\
\ eindeutig hervor, dass eine sofortige \nIV-Anmeldung angezeigt oder die IV\
\ nicht zuständig ist, wird auf ein \nGespräch verzichtet.\n13 Wo kann die IV-Stelle\
\ weitere Informationen einholen?\nGenügen die Informationen aus dem Gespräch\
\ für den Entscheid nicht, \nkann die IV-Stelle mit der Vollmacht der versicherten\
\ Person weitere Infor -\nmationen einholen, unter anderem bei medizinischem Fachpersonal,\
\ wei -\nteren Versicherungen, Arbeitgebenden oder der Sozialhilfe.\nEnde der\
\ Früherfassungsphase\n14 Wann endet die Früherfassung?\nMit dem Eingang der IV-Anmeldung\
\ oder der Mitteilung an die versicherte \nPerson, es sei keine solche nötig,\
\ endet die Früherfassungsphase. \f7Anmeldung für IV-Leistungen\n15 Wer kann eine\
\ IV-Anmeldung einreichen?\nGrundsätzlich muss die versicherte Person die IV-Anmeldung\
\ selbst einrei -\nchen. Auch ihr gesetzlicher Vertreter bzw. die Behörden oder\
\ Dritte, wel -\nche die Person regelmässig unterstützen bzw. dauernd betreuen,\
\ können \neinen Anspruch auf Leistungen der IV geltend machen.\n16 Wie erfolgt\
\ die Anmeldung?\nDie Anmeldung muss bei der IV-Stelle des Wohnsitzkantons der\
\ versicher -\nten Person eingereicht werden. Das entsprechende Antragsformular\
\ kann \nbei den IV-Stellen, den Ausgleichskassen und deren Zweigstellen sowie\
\ un -\nter www.ahv-iv.ch bezogen werden. \nFrühinterventionsphase \nFrühintervention\n\
17 Was ist das Ziel der Frühintervention?\nZiel der Frühintervention ist es, durch\
\ rasches Handeln die Arbeits- und \nErwerbsfähigkeit der betroffenen Person möglichst\
\ zu erhalten oder zu \nverbessern. Jugendliche, die bereits erwerbstätig waren,\
\ Arbeitsunfähige \noder von einer länger dauernden Arbeitsunfähigkeit bedrohte\
\ Erwachsene, \nwerden dabei unterstützt, ihren Arbeitsplatz im bisherigen Betrieb\
\ beizu -\nbehalten, bzw. betriebsintern oder in einem anderen Betrieb einen neuen\
\ \nArbeitsplatz zu übernehmen. \nMit den Frühinterventionsmassnahmen kann die\
\ IV auch Jugendliche und \njunge Erwachsene, die noch nicht erwerbstätig waren\
\ und von einer Inva -\nlidität bedroht sind, frühzeitig auf dem Weg in eine berufliche\
\ Ausbildung \noder in eine erste Anstellung im ersten Arbeitsmarkt unterstützen.\
\ \nDie Frühinterventionsphase beginnt mit der Einreichung der IV-Anmeldung \n\
und erstreckt sich maximal über eine Dauer von zwölf Monaten.\f8Bestandsaufnahme\n\
18 Was beinhaltet die Bestandsaufnahme?\nNach Eingang der IV-Anmeldung nimmt die\
\ IV-Stelle eine Bestandsauf -\nnahme vor. Diese hat zum Ziel, ein umfassendes\
\ Bild von der Gesamt- \nsituation der versicherten Person zu erhalten, das nebst\
\ den gesundheit -\nlichen und beruflichen Aspekten, den Ressourcen und Einschränkungen\
\ \nauch die familiäre, soziale und finanzielle Situation mitberücksichtigt. Ge\
\ -\nstützt auf diese Informationen entscheidet die IV-Stelle, ob Frühinterventi\
\ -\nonsmassnahmen, Integrationsmassnahmen oder Massnahmen beruflicher \nArt angezeigt\
\ sind. \nAuf die Bestandsaufnahme kann verzichtet werden, wenn aus der IV-An\
\ -\nmeldung hervorgeht, dass die Invalidenversicherung nicht zuständig, oder\
\ \ndie Eingliederung nicht möglich ist oder wenn nicht die Frage der Einglie\
\ -\nderung oder der Rente im Zentrum steht, sondern ein Hilfsmittel oder eine\
\ \nHilflosenentschädigung.\n19 Wer kann an der Bestandsaufnahme teilnehmen?\n\
Die versicherte Person kann sich während der Bestandsaufnahme, die in \nForm eines\
\ oder mehrerer Gespräche stattfindet, von weiteren Personen \nbegleiten lassen\
\ (z.B. Arbeitgebenden, behandelnde Ärztin oder behan -\ndelnder Arzt). Die Gespräche\
\ werden von der Eingliederungsfachperson \ngeführt. Hält es die IV-Stelle für\
\ angezeigt, kann auch ein Arzt oder eine \nÄrztin des regionalen ärztlichen Dienstes\
\ (RAD) hinzugezogen werden. \nEingliederungsplan \n20 Was beinhaltet der Eingliederungsplan?\n\
Gestützt auf die Bestandsaufnahme wird ein auf die versicherte Person \nzugeschnittener\
\ Eingliederungsplan ausgearbeitet. Der Eingliederungsplan\n• enthält die zu\
\ erreichenden Ziele und die vorgesehenen Massnahmen,\n• regelt die Kooperation\
\ zwischen den beteiligten Parteien,\n• definiert die Verantwortlichkeiten und\
\ Fristen.\nAuf Basis des Eingliederungsplanes kann eine Zielvereinbarung erstellt\
\ wer -\nden. \f921 Was sind Massnahmen der Frühintervention?\nMassnahmen der\
\ Frühintervention sind: \nWährend der obligatorischen Schulzeit ab dem vollendeten\
\ 13. Altersjahr: \n• Berufsberatung\n• Arbeitsvermittlung (Unterstützung bei\
\ der Suche nach einem Ausbil -\ndungsplatz)\nFür Jugendliche nach der obligatorischen\
\ Schulzeit und für Erwachsene: \n• Anpassungen des Arbeitsplatzes \n• Ausbildungskurse\
\ \n• Arbeitsvermittlung (Unterstützung beim Arbeitsplatzerhalt und bei der \n\
Stellensuche)\n• Berufsberatung\n• Sozialberufliche Rehabilitation \n• Beschäftigungsmassnahmen\
\ \n• Beratung und Begleitung\n22 Besteht ein Rechtsanspruch auf \nFrühinterventionsmassnahmen?\n\
Nein. Es besteht kein Rechtsanspruch auf Frühinterventionsmassnahmen. \n23 Besteht\
\ Anspruch auf ein IV-Taggeld?\nNein. Während der Durchführung dieser Massnahmen\
\ werden keine \nTaggelder der IV ausbezahlt.\nEnde der Frühinterventionsphase\n\
24 Wann endet die Frühintervention?\nDer Frühinterventionsprozess wird abgeschlossen\
\ durch einen Entscheid \nin Form \n• einer Mitteilung, dass der versicherten\
\ Person Integrationsmassnah -\nmen oder Massnahmen beruflicher Art gewährt werden,\n\
• der Mitteilung, die Rentenfrage werde geprüft, oder\n• einer ablehnenden Leistungsverfügung.\f\
10Auskünfte und weitere \nInformationen\nDieses Merkblatt vermittelt nur eine\
\ Übersicht. Für die Beurteilung von \nEinzelfällen sind ausschliesslich die gesetzlichen\
\ Bestimmungen mass -\ngebend. Die IV-Stellen, die Ausgleichskassen und ihre Zweigstellen\
\ \ngeben gerne Auskunft. Ein Verzeichnis aller Ansprechpartner finden \nSie\
\ unter www.ahv-iv.ch .\nHerausgegeben von der Informationsstelle AHV/IV in Zusammenarbeit\
\ \nmit dem Bundesamt für Sozialversicherungen.\nNachdruck November 2024. Auch\
\ auszugsweiser Abdruck ist nur mit \nschriftlicher Einwilligung der Informationsstelle\
\ AHV/IV erlaubt. \nDieses Merkblatt kann bei den Ausgleichskassen und deren Zweig-\
\ \nstellen sowie den IV-Stellen bezogen werden. Bestellnummer 4.12/d. \nEs ist\
\ ebenfalls unter www.ahv-iv.ch verfügbar.\n Weitere Informationen, Publikationen\
\ und Erklärvideos.\n4.12-24/01-D"
- "3.03 Prestazioni dell‘AVS\nRendite per superstiti\ndell’AVS\nStato al 1° gennaio\
\ 2025\f2In breve\nLe rendite per superstiti hanno lo scopo di evitare che, al\
\ decesso del \nconiuge, di uno o di entrambi i genitori, i superstiti vengano\
\ a trovarsi in \ngravi difficoltà finanziarie. Vi sono tre categorie di rendite\
\ per superstiti: \n• le rendite per vedove,\n• le rendite per vedovi,\n• le\
\ rendite per orfani.\nAffinché una persona abbia diritto a una rendita per superstiti,\
\ è necessario \nche alla persona deceduta si possa conteggiare almeno un anno\
\ di contri -\nbuzione completo.\nSi parla di anno di contribuzione completo quando:\n\
• la persona deceduta ha versato contributi complessivamente per un \nanno, oppure\n\
• la persona deceduta era assicurata e il coniuge ha versato il doppio del \n\
contributo minimo almeno per un anno, oppure\n• alla persona deceduta si possono\
\ conteggiare accrediti per compiti \neducativi o assistenziali.\f3Rendite per\
\ vedove\n1 Quali sono i requisiti che devono soddisfare le donne \nsposate per\
\ avere diritto alla rendita vedovile?\nLe donne sposate il cui marito o la cui\
\ moglie è deceduto/a hanno diritto a \nuna rendita vedovile se all’insorgere\
\ della vedovanza:\n• hanno uno o più figli (di qualsiasi età). Sono considerati\
\ come figli an -\nche i figli del coniuge deceduto che vivono nell’economia domestica\
\ \ncomune e, in seguito alla sua morte, hanno diritto a una rendita per \norfani.\
\ Lo stesso vale per gli affiliati precedentemente affidati alle cure \ndei coniugi,\
\ a condizione che siano in seguito adottati dalla vedova. \nÈ considerata vedova\
\ con figli anche la moglie della madre, se al mo -\nmento della nascita del figlio\
\ era sposata con la madre e se il figlio è \nstato concepito secondo le disposizioni\
\ della legge del 18 dicembre \n1998 sulla medicina della procreazione, e quindi\
\ sussiste un rapporto \ndi filiazione (art. 255a cpv. 1 CC), o\n• hanno compiuto\
\ 45 anni e sono state sposate per almeno cinque anni. \nSe hanno contratto più\
\ matrimoni, si tiene conto della durata comples -\nsiva dei diversi matrimoni.\
\ Per le coppie di persone dello stesso sesso \nche hanno convertito l’unione\
\ domestica registrata in matrimonio la \ndurata di quest’ultima viene aggiunta\
\ agli anni di matrimonio.\n \f42 Quali sono i requisiti che devono soddisfare\
\ le donne \ndivorziate per avere diritto alla rendita vedovile?\nLe donne divorziate\
\ il cui ex marito o la cui ex moglie è deceduto/a hanno \ndiritto a una rendita\
\ vedovile:\n• se hanno figli e il matrimonio è durato almeno dieci anni,\n•\
\ se il divorzio è intervenuto dopo che esse hanno compiuto 45 anni e il \nmatrimonio\
\ è durato almeno dieci anni,\n• se il figlio più giovane ha compiuto 18 anni\
\ dopo che la madre divor -\nziata ne ha compiuti 45.\nLe donne divorziate che\
\ non soddisfano alcuna di queste condizioni hanno \ndiritto a una rendita vedovile\
\ finché il figlio più giovane compie 18 anni.\nÈ considerata vedova con figli\
\ anche l’ex moglie della madre, se al mo -\nmento della nascita del figlio era\
\ sposata con la madre e se il figlio è stato \nconcepito secondo le disposizioni\
\ della legge del 18 dicembre 1998 sulla \nmedicina della procreazione, e quindi\
\ sussiste un rapporto di filiazione (art. \n255a cpv. 1 CC).\nSe l’unione domestica\
\ registrata è stata convertita in matrimonio, la sua \ndurata viene aggiunta\
\ agli anni di matrimonio.\nRendite per vedovi\n3 Quali sono i requisiti per il\
\ diritto alla rendita vedovile \ncome uomo sposato o come partner registrato?\n\
Gli uomini sposati la cui moglie o il cui marito è deceduta/o hanno diritto \n\
a una rendita se all’insorgere della vedovanza hanno uno o più figli (di \nqualsiasi\
\ età). Sono considerati come figli anche i figli del coniuge deceduto \nche vivono\
\ nell’economia domestica comune e, in seguito alla sua morte, \nhanno diritto\
\ a una rendita per orfani. Lo stesso vale per gli affiliati prece -\ndentemente\
\ affidati alle cure dei coniugi, a condizione che siano in seguito \nadottati\
\ dal vedovo. \nSe un partner registrato decede, il partner superstite è equiparato,\
\ a pre -\nscindere dal sesso, a un vedovo. \f5Nella sentenza dell’11 ottobre\
\ 2022, la Grande Camera della Corte euro -\npea dei diritti dell’uomo (Corte\
\ EDU) ha stabilito che nel caso in esame vi \nfosse una disparità di trattamento\
\ contraria alla Convenzione europea dei \ndiritti dell’uomo (CEDU), in quanto\
\ la rendita vedovile del ricorrente era \nstata soppressa quando il figlio più\
\ giovane aveva raggiunto la maggiore \netà, il che non sarebbe avvenuto per una\
\ vedova nella stessa situazione. \nLa Svizzera deve conformarsi a questa sentenza,\
\ passata in giudicato l’11 \nottobre 2022, e porre fine alla violazione del diritto\
\ constatata dalla Corte \nEDU. Le basi legali devono quindi essere adeguate tenendo\
\ conto della \nprocedura legislativa. Quest’ultima può essere relativamente lunga\
\ e si \nsvolgerà quindi solo in un secondo momento. Fino ad allora si applicherà\
\ \nuna regolamentazione transitoria per i vedovi con figli, entrata in in vigore\
\ \nl’11 ottobre 2022, secondo la quale il diritto alla rendita per vedovi non\
\ si \nestinguerà più al compimento del 18° anno d’età da parte del figlio più\
\ \ngiovane e la rendita verrà corrisposta oltre tale età. \nLa sentenza della\
\ Corte EDU non si applica né ai vedovi né ai divorziati \nsenza figli. Sulla\
\ base di questa sentenza, i vedovi senza figli continuano \na non avere diritto\
\ alla rendita vedovile e, nel caso di uomini divorziati, il \ndiritto alla stessa\
\ si estingue in ogni caso quando il figlio più giovane rag -\ngiunge la maggiore\
\ età. La sentenza della Corte EDU non si applica nem -\nmeno ai casi in cui la\
\ rendita per vedovi sia stata soppressa con decisione \npassata in giudicato\
\ prima dell’11 ottobre 2022 in seguito al compimento \ndel 18° anno d’età da\
\ parte del figlio più giovane. \n4 Quali sono i requisiti che devono soddisfare\
\ gli uomini \ndivorziati per avere diritto alla rendita vedovile? \nGli uomini\
\ divorziati la cui ex moglie o il cui ex marito è deceduta/o hanno \ndiritto\
\ a una rendita vedovile finché hanno figli di età inferiore ai 18 anni. \f6Rendite\
\ per orfani\n5 Quali sono i requisiti per il diritto alla rendita per \norfani?\n\
In caso di decesso di uno dei genitori, l’AVS versa ai figli una rendita per \n\
orfani. \nSe al momento della nascita del figlio la madre è sposata con una donna\
\ e \nil figlio è stato concepito secondo le disposizioni della legge del 18 dicem\
\ -\nbre 1998 sulla medicina della procreazione, la moglie della madre è consi\
\ -\nderata l’altro genitore (art. 255a cpv. 1 CC). In questi casi, alla morte\
\ della \nmoglie della madre il figlio ha diritto a una rendita per orfani.\n\
In caso di decesso di entrambi i genitori, i figli hanno diritto a due rendite\
\ \nper orfani: una per ciascun genitore. Il diritto alla rendita per orfani si\
\ estin -\ngue al 18° compleanno o al termine della formazione, ma al più tardi\
\ al \n25° compleanno. Per gli affiliati vigono disposizioni particolari. I figli\
\ che \ndurante la formazione conseguono un reddito lordo dell’attività lucrativa\
\ \nsuperiore a 30 240 franchi non hanno diritto a una rendita per orfani.\nInizio\
\ e fino del diritto\n6 Quando nasce il diritto a una rendita per superstiti?\n\
Il diritto alla rendita per superstiti nasce il primo giorno del mese successivo\
\ \na quello del decesso del coniuge (o dell’ex coniuge) o del genitore.\n7 Quando\
\ si estingue il diritto a una rendita per \nsuperstiti?\nIl diritto alla rendita\
\ per superstiti si estingue alla fine del mese in cui le con -\ndizioni non sono\
\ più adempiute. In caso di nuove nozze cessa il diritto alla \nrendita vedovile.\
\ Il diritto alle rendite per orfani continua invece a sussistere.\f7Concorso\
\ con altre prestazioni\n8 Quale delle rendite viene versata?\nSe una persona\
\ adempie contemporaneamente le condizioni poste per una \nrendita per superstiti\
\ e per una rendita di vecchiaia o d’invalidità, si versa \nsolo la rendita più\
\ elevata.\nRiscossione delle rendite per superstiti\n9 Dove far valere il proprio\
\ diritto a una rendita per \nsuperstiti?\nChi intende far valere il proprio\
\ diritto alla rendita per superstiti deve rivol -\ngersi alla cassa di compensazione\
\ che, per ultima, ha incassato i contributi \ndella persona deceduta. Il modulo\
\ 318.371 – Richiesta di una rendita per \nsuperstiti può essere ottenuto presso\
\ le casse di compensazione e le loro \nagenzie o sul sito internet www.avs-ai.ch\
\ . In seguito la domanda deve es -\nsere inoltrata presso la cassa di compensazione\
\ competente.\nGli assicurati che hanno compiuto periodi assicurativi in Svizzera\
\ e in uno \no più Stati membri dell’UE o dell’AELS possono semplicemente inoltrare\
\ \nla richiesta nel Paese di domicilio: con questa richiesta prenderanno avvio\
\ \nanche le procedure necessarie in tutti gli altri Paesi interessati.\nSe la\
\ persona deceduta non ha versato contributi AVS, il diritto a rendite \nper superstiti\
\ dev’essere fatto valere presso la cassa di compensazione can -\ntonale oppure\
\ presso la sua agenzia.\nSe risiede all’estero, voglia consultare la rubrica\
\ «Richiedere una rendita per \nsuperstiti» sul sito Internet della Cassa svizzera\
\ di compensazione (CSC): \nwww.cdc.admin.ch\f8Calcolo delle rendite per superstiti\n\
10 Come si calcolano le rendite per superstiti?\nGli elementi di calcolo delle\
\ rendite per superstiti sono:\n• gli anni di contribuzione,\n• i redditi da\
\ attività lucrativa e\n• gli accrediti per compiti educativi o assistenziali\
\ della persona deceduta.\nPer il calcolo degli anni di contribuzione ai fini\
\ della rendita per vedovi e \nla rendita per orfani in seguito al decesso della\
\ (ex) moglie o della madre \nvale quanto segue: gli anni di matrimonio trascorsi\
\ prima del 31 dicembre \n1996 (durante i quali la moglie era assicurata, ma non\
\ tenuta a versare i \ncontributi) sono conteggiati come anni di contribuzione.\n\
11 Quando si percepisce una rendita completa?\nI superstiti percepiscono una rendita\
\ completa (scala delle rendite 44) se \nla persona deceduta ha versato contributi\
\ per l’intera durata contributiva, \nossia dal 1° gennaio dell’anno successivo\
\ al compimento dei 20 anni fino \nal decesso.\n12 Quando si percepisce una rendita\
\ parziale?\nIn caso di durata di contribuzione incompleta, vale a dire se la\
\ persona \ndeceduta non conta lo stesso numero di anni interi di contribuzione\
\ della \nsua classe di età, è versata una rendita parziale (scala delle rendite\
\ 1-43). \nLa rendita parziale è calcolata secondo il rapporto esistente tra gli\
\ anni di \ncontribuzione effettivi della persona deceduta e la durata di contribuzione\
\ \ncompleta.\n13 A chi vengono conteggiati i cosiddetti anni giovanili?\nGli\
\ anni di gioventù sono i periodi di contribuzione compiuti dai 18 ai 20 \nanni.\
\ Se la persona deceduta ha compiuto periodi di contribuzione fino \nai 20 anni,\
\ questi le vengono conteggiati come anni giovanili per colmare \neventuali lacune\
\ contributive successive.\f914 I periodi di contribuzione compiuti dopo l’età\
\ di \nriferimento vengono conteggiati?\nSe la persona deceduta ha continuato\
\ a lavorare dopo l’età di riferimento, \na determinate condizioni i periodi di\
\ contribuzione in questione possono \nessere conteggiati per colmare eventuali\
\ lacune contributive o aumentare \nla rendita grazie al computo dei redditi supplementari\
\ dell’attività lucrativa. \nDopo il raggiungimento dell’età di riferimento, un\
\ nuovo calcolo della ren -\ndita può essere richiesto soltanto una volta. Se\
\ la persona deceduta non \nha chiesto un nuovo calcolo della rendita di vecchiaia,\
\ i superstiti possono \npresentare una domanda in tal senso per la rendita per\
\ superstiti che la \nsostituirà. \nPer ulteriori informazioni si veda l’opuscolo\
\ informativo 3.08 – Nuovo cal -\ncolo della rendita di vecchiaia dopo l’età di\
\ riferimento .\n15 Com’è composto il reddito annuo medio?\nIl reddito annuo medio\
\ è composto:\n• dalla media dei redditi da attività lucrativa,\n• dalla media\
\ degli accrediti per compiti educativi e\n• dalla media degli accrediti per\
\ compiti assistenziali.\nMedia dei redditi da attività lucrativa\n16 Come si\
\ calcola la media dei redditi da attività lucrativa?\nLe rendite per superstiti\
\ sono calcolate sulla base dei redditi da attività \nlucrativa conseguiti dalla\
\ persona deceduta.\nPer calcolare la media dei redditi da attività lucrativa\
\ vengono sommati tutti \ni redditi realizzati fino al 31 dicembre dell’anno precedente\
\ l’insorgenza \ndell’evento assicurato. I redditi conseguiti negli anni giovanili\
\ sono presi \nin considerazione solo se servono a colmare lacune contributive\
\ sorte più \ntardi.\nI redditi da attività lucrativa sono registrati sui cosiddetti\
\ conti individuali \n(CI) di ogni persona. \f1017 La somma dei redditi da attività\
\ lucrativa viene \nadeguata all’evoluzione dei prezzi e dei salari? Come?\n\
I redditi possono essere stati conseguiti in anni in cui il livello dei salari\
\ era \npiù basso. La somma dei redditi è perciò rivalutata in base all’evoluzione\
\ \nmedia dei prezzi e dei salari. La somma rivalutata è quindi divisa per il\
\ \nnumero degli anni e dei mesi computabili. Il risultato è la media dei redditi\
\ \nda attività lucrativa.\n18 Che cos’è il cosiddetto supplemento di carriera?\n\
Se la persona deceduta non aveva ancora compiuto 45 anni al momento \ndel decesso,\
\ la media del reddito da attività lucrativa è aumentata di un \nsupplemento percentuale\
\ (supplemento di carriera) in funzione dell’età.\nIn caso di decesso Percentuale\n\
dopo il compimento \ndei… anniprima del compimento \ndei… anni\n23 100\n23 24\
\ 90\n24 25 80\n25 26 70\n26 27 60\n27 28 50\n28 30 40\n30 32 30\n32 35 20\n35\
\ 39 10\n39 45 5\f11Media degli accrediti per compiti educativi e \nassistenziali\n\
19 Che cosa sono gli accrediti per compiti educativi?\nNel calcolo della rendita\
\ per superstiti, si può attribuire a una persona \ndeceduta un accredito per\
\ compiti educativi per ogni anno in cui si è occu -\npata di figli di età inferiore\
\ ai 16 anni. Questo accredito ammonta al triplo \ndella rendita minima annua.\
\ Per le persone coniugate, l’accredito è diviso a \nmetà durante gli anni civili\
\ di matrimonio. Tuttavia, la ripartizione interessa \nunicamente gli accrediti\
\ acquisiti durante il periodo tra il 1° gennaio che \nsegue il compimento dei\
\ 20 anni e il 31 dicembre che precede il raggiun -\ngimento dell’età di riferimento\
\ da parte del coniuge più anziano. La media \ndegli accrediti per compiti educativi\
\ si ottiene dividendo la somma degli \nstessi per il periodo di contribuzione\
\ complessivo. \nNel caso dei genitori divorziati o non coniugati che esercitano\
\ l’autorità pa -\nrentale congiunta, gli accrediti per compiti educativi vengono\
\ conteggiati, \ninteramente a uno dei genitori o per metà a ciascuno dei due,\
\ in applica -\nzione della decisione del tribunale o dell’autorità di protezione\
\ dei minori e \ndegli adulti (APMA) o sulla base della convenzione parentale.\
\ \nAl riguardo, si rimanda alle indicazioni dettagliate dell’opuscolo informa\
\ -\ntivo 1.07 – Accrediti per compiti educativi .\n20 Che cosa sono gli accrediti\
\ per compiti assistenziali?\nAlle persone decedute possono essere conteggiati\
\ accrediti per compiti assi -\nstenziali per gli anni in cui esse hanno assistito\
\ parenti al beneficio di assegni \nper grandi invalidi che potevano essere facilmente\
\ raggiungibili. Sono parifi -\ncati ai parenti i partner che convivono con gli\
\ assicurati nella medesima eco -\nnomia domestica ininterrottamente da almeno\
\ cinque anni. Per gli anni in cui \nsi possono conteggiare accrediti per compiti\
\ educativi non vi è diritto ad ac -\ncrediti per compiti assistenziali. L’importo\
\ dell’accredito per compiti assisten -\nziali ammonta al triplo della rendita\
\ minima annua. Per le persone coniugate \nl’accredito è diviso a metà durante\
\ gli anni civili di matrimonio. Tuttavia, la \nripartizione interessa unicamente\
\ gli accrediti acquisiti durante il periodo \ntra il 1° gennaio che segue il\
\ compimento dei 20 anni e il 31 dicembre che \nprecede il raggiungimento dell’età\
\ di riferimento da parte del coniuge più \nanziano. Si ottiene la media degli\
\ accrediti per compiti assistenziali divi -\ndendo la somma degli stessi per\
\ il periodo di contribuzione complessivo. \f12La richiesta d’iscrizione di accrediti\
\ per compiti assistenziali deve essere \npresentata ogni anno per l’anno precedente\
\ alla cassa cantonale di com -\npensazione del luogo di domicilio della persona\
\ assistita. A tale scopo va \nutilizzato il modulo 318.270 – Richiesta d’iscrizione\
\ di accrediti per compiti \nassistenziali.\nAl riguardo, si rimanda alle indicazioni\
\ dettagliate dell’opuscolo informa -\ntivo 1.03 – Accrediti per compiti assistenziali\
\ .\nImporto delle rendite\n21 Quali sono gli importi attuali delle rendite?\n\
In caso di durata completa di contribuzione, le rendite complete ordinarie \n\
ammontano, a seconda del reddito medio, a:\nminimo massimo\nCHF/mese CHF/mese\n\
Rendita per vedove e vedovi 1 008.– 2 016.–\nRendita per orfani 504.– 1 008.–\n\
Se, per lo stesso figlio, sono concesse due rendite per orfani oppure una \nrendita\
\ per orfani e una rendita per figli, la somma delle due rendite non \ndeve superare\
\ l’importo di 1 512 franchi, ossia il 60 % dell’importo mas-\nsimo della rendita\
\ di vecchiaia.\nPrestazioni complementari\n22 Chi ha diritto a prestazioni complementari?\n\
Le vedove, i vedovi e gli orfani di modeste condizioni economiche hanno \ndiritto,\
\ a certe condizioni, a prestazioni complementari. Al riguardo, si ri -\nmanda\
\ alle indicazioni dettagliate degli opuscoli informativi 5.01 – Presta -\nzioni\
\ complementari all’AVS e all’AI e 5.02 – Diritto a prestazioni comple -\nmentari\
\ all’AVS e all’AI .\nSe risiede all’estero, non ha diritto alle prestazioni complementari\
\ all’AVS \ne all’AI.\f13Esempio di calcolo\n23 Decesso del marito o del padre\n\
Un assicurato nato nel giugno 1975 muore nel marzo 2025. Lascia la \nmoglie e\
\ due figli, nati nel 2007 e nel 2008. Sono quindi computabili \ncompiti educativi\
\ per 17 anni. Dal 1° aprile 2025 sono versate una rendita \nvedovile e due rendite\
\ per orfani. Dal 1996 fino alla sua morte, il defunto \nha pagato ininterrottamente\
\ i contributi AVS; ai suoi superstiti sono per -\ntanto concesse rendite complete\
\ ( scala delle rendite 44 ).\nLa media dei redditi da attività lucrativa è calcolata\
\ come segue, \nsulla base dei conti individuali:\nSomma dei redditi conseguiti\
\ durante 29 anni \ndi contribuzione, dal 1996 al 2024 CHF 1 600 000.–\nLa somma\
\ rivalutata divisa per la durata di \ncontribuzione determinante (29 anni) dà\
\ una \nmedia dei redditi dell’attività lucrativa di CHF 55 172.–\nLa media degli\
\ accrediti per compiti educativi viene calcolata \ncome segue:\nNumero di anni\
\ x il triplo della rendita \nminima annua ÷ durata di contribuzione ÷ 2 \n\
17 x 45 360 franchi ÷ 29 ÷ 2 CHF 13 295.–\nCalcolo del reddito annuo medio e delle\
\ rendite:\nMedia dei redditi dell’attività lucrativa CHF 55 172.–\nMedia degli\
\ accrediti per compiti educativi CHF 13 295.–\nReddito annuo medio (arrotondato\
\ per eccesso \nal valore successivo delle tabelle allegata, Scala 44: \nrendite\
\ complete mensili ) di CHF 69 552.–\nCome risulta dalla tabella allegata, \n\
gli importi delle rendite sono i seguenti: \nrendita per vedove CHF 1 790.–\n\
due rendite per orfani, ciascuna CHF 895.–\nAllegato\n• Tabella per le rendite\
\ complete (scala delle rendite 44)\n• Tabella dei fattori di rivalutazione\f\
14Rendite AVS/AI dal 1° gennaio 2025\nScala 44: rendite complete mensili \
\ Importi in franchi\nBase di calcolo Rendita di \nvecchiaia e\
\ \nd’invalidità Rendita di \nvecchiaia e \nd’invalidità \nper vedove/\nvedoviRendite\
\ per i superstiti\nReddito annuo \nmedio determi -\nnanteVedove/ \nvedoviRendita\
\ \ncomple -\ntivaRendita per \norfani e per \nfigliRendita per \norfani \n60\
\ %*\n1/1 1/1 1/1 1/1\n fino a 15 120 1 260 1 512 1 008 378 504 756\n16 632\
\ 1 293 1 551 1 034 388 517 776\n18 144 1 326 1 591 1 060 398 530 795\n\
19 656 1 358 1 630 1 087 407 543 815\n21 168 1 391 1 669 1 113 417 556 \
\ 835\n22 680 1 424 1 709 1 139 427 570 854\n24 192 1 457 1 748 1 165 437\
\ 583 874\n25 704 1 489 1 787 1 191 447 596 894\n27 216 1 522 1 826 1\
\ 218 457 609 913\n28 728 1 555 1 866 1 244 466 622 933\n30 240 1 588\
\ 1 905 1 270 476 635 953\n31 752 1 620 1 944 1 296 486 648 972\n33 264\
\ 1 653 1 984 1 322 496 661 992\n34 776 1 686 2 023 1 349 506 674 1 011\n\
36 288 1 719 2 062 1 375 516 687 1 031\n37 800 1 751 2 102 1 401 525 701 1\
\ 051\n39 312 1 784 2 141 1 427 535 714 1 070\n40 824 1 817 2 180 1 454 545\
\ 727 1 090\n42 336 1 850 2 220 1 480 555 740 1 110\n43 848 1 882 2 259 1\
\ 506 565 753 1 129\n45 360 1 915 2 298 1 532 575 766 1 149\n46 872 1 935\
\ 2 322 1 548 581 774 1 161\n48 384 1 956 2 347 1 564 587 782 1 173\n49 896\
\ 1 976 2 371 1 580 593 790 1 185\n51 408 1 996 2 395 1 597 599 798 1 197\n\
52 920 2 016 2 419 1 613 605 806 1 210\n54 432 2 036 2 443 1 629 611 814 1\
\ 222\n55 944 2 056 2 468 1 645 617 823 1 234\n57 456 2 076 2 492 1 661 623\
\ 831 1 246\n58 968 2 097 2 516 1 677 629 839 1 258\n60 480 2 117 2 520 1\
\ 693 635 847 1 270\n61 992 2 137 2 520 1 710 641 855 1 282\n63 504 2 157\
\ 2 520 1 726 647 863 1 294\n65 016 2 177 2 520 1 742 653 871 1 306\n66 528\
\ 2 197 2 520 1 758 659 879 1 318\n68 040 2 218 2 520 1 774 665 887 1 331\n\
69 552 2 238 2 520 1 790 671 895 1 343\n71 064 2 258 2 520 1 806 677 903 1\
\ 355\n72 576 2 278 2 520 1 822 683 911 1 367\n74 088 2 298 2 520 1 839 689\
\ 919 1 379\n75 600 2 318 2 520 1 855 696 927 1 391\n77 112 2 339 2 520 1\
\ 871 702 935 1 403\n78 624 2 359 2 520 1 887 708 943 1 415\n80 136 2 379\
    \ 2 520 1 903 714 952 1 427\n81 648 2 399 2 520 1 919 720 960 1 439\n83 160
\ 2 419 2 520 1 935 726 968 1 452\n84 672 2 439 2 520 1 951 732 976 1 464\n\
86 184 2 460 2 520 1 968 738 984 1 476\n87 696 2 480 2 520 1 984 744 992 1\
\ 488\n89 208 2 500 2 520 2 000 750 1 000 1 500\n 90 720 e più 2 520\
\ 2 520 2 016 756 1 008 1 512\n* Gli importi valgono anche per le rendite doppie\
\ per orfani e per le rendite intere doppie per figli previste dal diritto \n\
previgente.\f15Fattori forfetari di rivalutazione, calcolati in funzione dell’en\
\ -\ntrata nell’assicurazione: insorgenza del caso d’assicurazione \nnel 2025\n\
Prima registra- \nzione nel CI*Fattore di \nrivalutazionePrima registra- \n\
zione nel CI*Fattore di \nrivalutazione\n1976 1,110 2001 1,000\n1977 1,098 2002\
    \ 1,000\n1978 1,086 2003 1,000\n1979 1,075 2004 1,000\n1980 1,063 2005 1,000\n
1981 1,052 2006 1,000\n1982 1,042 2007 1,000\n1983 1,032 2008 1,000\n1984 1,022\
\ 2009 1,000\n1985 1,013 2010 1,000\n1986 1,004 2011 1,000\n1987 1,000 2012 1,000\n\
1988 1,000 2013 1,000\n1989 1,000 2014 1,000\n1990 1,000 2015 1,000\n1991 1,000\
\ 2016 1,000\n1992 1,000 2017 1,000\n1993 1,000 2018 1,000\n1994 1,000 2019 1,000\n\
1995 1,000 2020 1,000\n1996 1,000 2021 1,000\n1997 1,000 2022 1,000\n1998 1,000\
\ 2023 1,000\n1999 1,000 2024 1,000\n2000 1,000\n* La prima registrazione determinante\
    \ nel CI, che va presa in considerazione per \nil calcolo della rendita, può risalire
\ al più presto all’anno civile del compimento dei \n21 anni.\f16Chiarimenti e\
\ altre \ninformazioni\nQuesto opuscolo informativo presenta solo una panoramica\
\ riassun -\ntiva. Per la valutazione dei singoli casi fanno stato esclusivamente\
\ le \ndisposizioni legali in vigore. Per ulteriori informazioni ci si può rivolgere\
\ \nalle casse di compensazione o alle loro agenzie. L’elenco delle casse di \n\
compensazione è pubblicato all’indirizzo Internet www.avs-ai.ch .\nI termini relativi\
\ allo stato civile hanno anche il significato seguente:\n• matrimonio: unione\
\ domestica registrata,\n• divorzio: scioglimento giudiziale dell’unione domestica\
\ registrata,\n• decesso del coniuge: decesso del partner registrato.\nPubblicato\
\ dal Centro d’informazione AVS/AI in collaborazione con \nl’Ufficio federale\
\ delle assicurazioni sociali.\nEdizione novembre 2024. La riproduzione, anche\
\ solo parziale, è \nautorizzata soltanto con il consenso scritto del Centro\
\ d’informazione \nAVS/AI.\nQuesto opuscolo informativo può essere richiesto\
\ alle casse di compen -\nsazione, alle loro agenzie e agli uffici AI. Numero\
\ di ordinazione 3.03/i. \nÈ disponibile anche su www.avs-ai.ch .\n Ulteriori\
\ informazioni, pubblicazioni e video esplicativi.\n3.03-25/01-I"
- "1.05Stand am 1. Januar 2015 \nEtat au 1er janvier 2015 \nStato al 1° gennaio\
\ 2015\nSeiten \n3-5Erläuterungen \nzur Kontenübersicht\nPages \n6-8Explications\
\ \nconcernant l’aperçu \ndes comptes\nPagine \n9-11Spiegazioni relative \n\
alla ricapitolazione \ndei conti\f2\f3Erläuterungen zur Kontenübersicht\nAuf\
\ einen Blick \nSie und Ihr Ehegatte oder Ihre Ehegattin erhalten im Scheidungsfall\
\ nach \nAbschluss des Splitting-Verfahrens eine Kontenübersicht. Sind Sie oder\
\ Ihr \nEhegatte oder Ihre Ehegattin bereits rentenberechtigt, erhalten Sie anstelle\
\ \nder Kontenübersicht eine neue anfechtbare Rentenverfügung.\nDie Kontenübersicht\
\ bietet einen Überblick über sämtliche Einkommen, die \nseit Beginn der Beitragspflicht\
\ bei der AHV/IV in Ihren Individuellen Konten \n(IK) eingetragen wurden. Diese\
\ Eintragungen bilden die Grundlage für die \nspätere Rentenberechnung.\nDie Einkommen,\
\ die Sie und Ihr Ehegatte oder Ihre Ehegattin während der \nEhejahre erzielt\
\ haben, werden aufgeteilt und Ihnen je zur Hälfte gutge -\nschrieben. Sie sind\
\ in der Übersicht als geteilte Einkommen gekennzeich -\nnet. Betreuungsgutschriften\
\ werden separat aufgeführt, während Erzie -\nhungsgutschriften erst im Rentenfall\
\ zum Einkommen hinzugezählt und in \ndie Berechnung miteinbezogen werden.\nFalls\
\ im gleichen Jahr mehrere Ausgleichskassen ein Individuelles Konto für \nSie\
\ führten (weil Sie bei mehreren Arbeitgebern tätig waren), werden die \nEinkommen\
\ für das betreffende Jahr zusammengezählt.\f4Allfällige Korrekturen und Beitragslücken\n\
1 Wie erfahre ich allfällige Änderungen?\nDie Ausgleichskassen nehmen Korrekturen,\
\ die nach Abschluss des Ver -\nfahrens notwendig werden, für Sie und Ihren Ehegatten\
\ oder Ihre Ehe -\ngattin automatisch vor. Die eingetragenen Einkommen können\
\ sich \ndaher unter Umständen noch ändern. Die Ausgleichskassen orientieren\
\ nicht \nautomatisch über diese Änderungen. Wir empfehlen Ihnen daher drin -\n\
gend, nach etwa drei Jahren noch einmal einen detaillierten Auszug aus \ndem IK\
\ zu verlangen.\n2 Was ist, wenn ich während der Ehe nicht \nerwerbstätig war?\n\
Wenn Sie während der Ehe nicht erwerbstätig waren und deshalb keine \nBeiträge\
\ entrichtet haben, können Sie sich zur Abklärung der Beitrags -\npflicht an die\
\ kantonale Ausgleichskasse wenden.\nVerfahren\n3 Wo kann ich den Kontoauszug\
\ verlangen?\nSie können den Kontoauszug\n• bei den kontoführenden Ausgleichskassen\
\ verlangen, oder\n• irgendeine Ausgleichskasse damit beauftragen, für Sie sämtliche\
\ Kon -\ntoauszüge zu beschaffen.\nDer Kontoauszug ist kostenlos. \nBeanstandung\
\ der Eintragung\n4 Kann ich eine Berichtigung der Eintragungen \nverlangen?\
\ \nJa. Sie können innert 30 Tagen nach der Zustellung des Kontoauszugs bei \n\
der Ausgleichskasse, die das beanstandete Konto führt, eine Berichtigung \nverlangen,\
\ wenn Sie die Richtigkeit der Einträge nicht anerkennen. Den \nEntscheid über\
\ das Berichtigungsbegehren fällt die Ausgleichskasse in \nForm einer Kassenverfügung.\
\ \f5Auskünfte und weitere \nInformationen\nDieses Merkblatt vermittelt nur eine\
\ Übersicht. Für die Beurteilung \nvon Einzelfällen sind ausschliesslich die gesetzlichen\
\ Bestimmungen \nmassgebend. Die Ausgleichskassen und ihre Zweigstellen geben\
\ gerne \nAuskunft. Ein Verzeichnis aller Ausgleichskassen finden Sie unter \n\
www.ahv-iv.ch .\nDie Zivilstandsbezeichnungen haben auch die folgende Bedeutung:\
\ \n• Ehe/Heirat: eingetragene Partnerschaft\n• Scheidung: gerichtliche Auflösung\
\ der Partnerschaft\n• Verwitwung: Tod des eingetragenen Partners / der eingetragenen\
\ \nPartnerin\nHerausgegeben von der Informationsstelle AHV/IV in Zusammenarbeit\
\ \nmit dem Bundesamt für Sozialversicherungen.\nNachdruck Oktober 2021. Auch\
\ auszugsweiser Abdruck ist nur mit \nschriftlicher Einwilligung der Informationsstelle\
\ AHV/IV erlaubt. \nDieses Merkblatt kann bei den Ausgleichskassen und deren Zweig-\
\ \nstellen sowie den IV-Stellen bezogen werden. Bestellnummer 1.05. Es \nist\
\ ebenfalls unter www.ahv-iv.ch verfügbar.\f6Explications concernant l’aperçu\
\ \ndes comptes\nEn bref\nEn cas de divorce, vous et votre ex-conjoint recevez\
\ un aperçu des comptes \ndès que la procédure de partage des revenus est achevée.\
\ Si l’un de vous \ntouche déjà une rente, vous recevrez, en lieu et place de\
\ l’aperçu des comp -\ntes, une nouvelle décision de rente susceptible de recours.\n\
L’aperçu des comptes vous donne une vue d’ensemble des revenus inscrits \ndans\
\ vos Comptes Individuels (CI) depuis que vous êtes soumis/e à l’ob -\nligation\
\ de cotiser à l’AVS/AI. Ces inscriptions forment la base du calcul \nultérieur\
\ des rentes.\nLes revenus réalisés par vous et votre ex-conjoint pendant les\
\ années de \nmariage sont divisés et attribués par moitié à chacun. Dans l’aperçu\
\ des \ncomptes, ils sont désignés comme revenus partagés. Les bonifications pour\
\ \ntâches d’assistance sont mentionnées séparément, tandis que les bonifica -\n\
tions pour tâches éducatives qui peuvent entrer dans le calcul d’une rente \n\
ne sont prises en considération qu’au moment du calcul de la rente.\nSi, pour\
\ une année, plusieurs caisses de compensation tenaient un Compte \nIndividuel\
\ pour vous (parce que vous aviez plusieurs employeurs), les reve -\nnus de cette\
\ année-là sont additionnés.\f7Corrections éventuelles et lacunes de cotisations\n\
1 Comment puis-je avoir connaissance des modifications ?\nUne fois la procédure\
\ terminée, les caisses de compensation effectuent au -\ntomatiquement les corrections\
\ qui s’avèrent nécessaires sur votre compte \net sur celui de votre ex-conjoint.\
\ Il se peut donc que les revenus inscrits \nsoient encore modifiés. Mais les\
\ caisses de compensation ne signalent pas \nd’office ces modifications-là. Nous\
\ vous recommandons donc instamment \nde demander un nouvel extrait détaillé des\
\ CI après trois ans environ.\n2 Qu’en est-il si je n’ai pas exercé d’activité\
\ lucrative \npendant le mariage ?\nSi vous n’avez pas exercé d’activité lucrative\
\ pendant vos années de ma-\nriage et que vous n’avez jamais dû cotiser, vous\
\ pouvez vous adresser à la \ncaisse cantonale de compensation pour régler la\
\ question de votre obliga -\ntion de cotiser.\nProcédure\n3 Où puis-je demander\
\ l’extrait de compte ?\nVous pouvez\n• demander un extrait de compte auprès de\
\ toute caisse de compensa -\ntion tenant pour vous un compte, ou\n• charger n’importe\
\ quelle caisse de compensation de vous procurer \ntous les extraits de compte\
\ vous concernant.\nL’extrait de compte est gratuit. \nContestation des inscriptions\n\
4 Puis-je exiger une rectification des inscriptions? \nOui. Vous pouvez, dans\
\ les 30 jours suivant la remise de l’extrait de compte, \ndemander une rectification\
\ à la caisse de compensation qui tient le compte \nconcerné si vous contestez\
\ l’exactitude des inscriptions qui y sont faites. \nLa caisse de compensation\
\ se prononcera sur votre demande par voie de \ndécision.\f8\nRenseignements et\
\ autres \ninformations\nCe mémento ne fournit qu’un aperçu général. Pour le\
\ règlement des \ncas individuels, seules les dispositions légales font foi. Les\
\ caisses de \ncompensation, leurs agences et les offices AI fournissent volontiers\
\ \ntous les renseignements utiles. Vous trouverez la liste complète des \ncaisses\
\ de compensation sur le site www.avs-ai.ch .\nLes désignations d’état civil utilisées\
\ ici ont également les significations \nsuivantes : \n• mariage : partenariat\
\ enregistré ;\n• divorce : dissolution judiciaire du partenariat enregistré ;\n\
• décès du conjoint : décès du partenaire enregistré.\nPublié par le Centre d’information\
\ AVS/AI en collaboration avec l’Of -\nfice fédéral des assurances sociales.\n\
    Réimpression octobre 2021. Toute reproduction, même partielle, n'est \nautorisée
\ qu’avec l’accord écrit du Centre d’information AVS/AI. \nCe mémento peut être\
\ obtenu auprès des caisses de compensation et \nde leurs agences ainsi qu’auprès\
\ des offices AI. Numéro de commande \n1.05. Il est également disponible sous\
\ www.avs-ai.ch .\f9Spiegazioni relative alla \nricapitolazione dei conti\nIn\
\ breve\nDopo che è stata conclusa la procedura di splitting in caso di divorzio,\
\ \nentrambi gli ex coniugi ricevono una ricapitolazione dei conti. Se uno di\
\ \nloro ha già diritto a una rendita, al posto della ricapitolazione riceverà\
\ una \nnuova decisione impugnabile relativa alla rendita.\nLa ricapitolazione\
\ dei conti fornisce una visione complessiva dell’insieme \ndei redditi che dall’inizio\
\ dell’obbligo di contribuzione sono stati registrati \npresso l’AVS/AI sui Conti\
\ Individuali (CI) della persona interessata. Queste \nregistrazioni costituiscono\
\ la base per il successivo calcolo delle rendite.\nLa somma dei redditi percepiti\
\ dai coniugi durante gli anni di matrimo -\nnio viene divisa in due e accreditata\
\ a ciascuno per metà. Essi sono cont -\nrassegnati sulla ricapitolazione come\
\ redditi divisi. Gli accrediti per compiti \nassistenziali sono registrati separatamente,\
\ mentre gli accrediti per compiti \neducativi vengono aggiunti al reddito solo\
\ in caso di rendita e quindi con -\nsiderati nel calcolo.\nQualora nello stesso\
\ anno più di una cassa di compensazione abbia tenuto \nun Conto Individuale per\
\ la stessa persona (perché quest’ultima ha eser -\ncitato un’attività lucrativa\
\ presso diversi datori di lavoro), i redditi dell’anno \nin questione vengono\
\ addizionati.\n \f10Correzioni e lacune contributive eventuali\n1 Come si può\
\ sapere se vi sono state modifiche nel CI?\nLe casse di compensazione eseguono\
\ automaticamente per entrambi i \nconiugi le correzioni necessarie una volta\
\ conclusa la procedura. I redditi \nregistrati possono quindi ancora cambiare\
\ a seconda delle circostanze. Le \ncasse di compensazione non forniscono automaticamente\
\ informazioni \nsu queste modifiche. È quindi fortemente auspicabile richiedere\
\ un nuovo \nestratto dettagliato del CI dopo circa tre anni.\n2 A chi devono\
\ rivolgersi le persone che durante il \nmatrimonio non esercitavano un’attività\
\ lucrativa \nper verificare il loro obbligo di contribuzione?\nPer verificare\
\ il loro attuale obbligo di contribuzione le persone che durante \nil matrimonio\
\ non esercitavano un’attività lucrativa e non hanno dovuto \nversare contributi\
\ possono rivolgersi alla cassa di compensazione canto -\nnale. \nProcedura\n\
3 Dove può essere richiesto un estratto del Conto \nIndividuale?\nL’assicurato\
\ può\n• richiedere un estratto conto presso ciascuna delle casse di compensa\
\ -\nzione che tengono un conto a suo nome oppure\n• incaricare qualsiasi cassa\
\ di compensazione di farglieli pervenire tutti.\nGli estratti conto sono gratuiti.\n\
Contestazione della registrazione\n4 È possibile chiedere una rettifica delle\
\ registrazioni?\nSì. Chi contesta l’esattezza delle registrazioni può chiedere\
\ una rettifica \nentro i 30 giorni seguenti la notifica dell’estratto conto contestato\
\ alla \ncassa di compensazione che gestisce il conto interessato, che emanerà\
\ una \ndecisione in merito.\f11Chiarimenti e altre \ninformazioni\nQuesto opuscolo\
\ informativo presenta solo una panoramica riassun -\ntiva. Per la valutazione\
\ dei singoli casi fanno stato esclusivamente le \ndisposizioni legali in vigore.\
\ Per ulteriori informazioni ci si può rivolgere \nalle casse di compensazione\
\ o alle loro agenzie. L’elenco delle casse di \ncompensazione è pubblicato all’indirizzo\
\ Internet www.avs-ai.ch .\nI termini relativi allo stato civile hanno anche il\
\ significato seguente:\n• matrimonio: unione domestica registrata,\n• divorzio:\
\ scioglimento giudiziale dell’unione domestica registrata,\n• decesso del coniuge:\
\ decesso del partner registrato.\nPubblicato dal Centro d’informazione AVS/AI\
\ in collaborazione con \nl’Ufficio federale delle assicurazioni sociali.\nRistampa\
\ ottobre 2021. La riproduzione, anche solo parziale, è au -\ntorizzata soltanto\
\ con il consenso scritto del Centro d’informazione \nAVS/AI.\nQuesto opuscolo\
\ informativo può essere richiesto alle casse di com -\npensazione, alle loro\
\ agenzie e agli uffici AI. Numero di ordinazione 1.05. \nÈ disponibile anche\
\ su www.avs-ai.ch .\n1.05-15/01-M"
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy@1
- cosine_precision@1
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@1
- cosine_ndcg@5
- cosine_ndcg@10
- cosine_mrr@1
- cosine_mrr@5
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on nomic-ai/modernbert-embed-base
results:
- task:
type: triplet
name: Triplet
dataset:
name: tri acc
type: tri-acc
metrics:
- type: cosine_accuracy
value: 0.1111111119389534
name: Cosine Accuracy
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: ir metrics
type: ir-metrics
metrics:
- type: cosine_accuracy@1
value: 0.013888888888888888
name: Cosine Accuracy@1
- type: cosine_precision@1
value: 0.013888888888888888
name: Cosine Precision@1
- type: cosine_precision@5
value: 0.019444444444444445
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.015277777777777779
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.013888888888888888
name: Cosine Recall@1
- type: cosine_recall@5
value: 0.09722222222222222
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.1527777777777778
name: Cosine Recall@10
- type: cosine_ndcg@1
value: 0.013888888888888888
name: Cosine Ndcg@1
- type: cosine_ndcg@5
value: 0.05847664758364316
name: Cosine Ndcg@5
- type: cosine_ndcg@10
value: 0.07661602804046352
name: Cosine Ndcg@10
- type: cosine_mrr@1
value: 0.013888888888888888
name: Cosine Mrr@1
- type: cosine_mrr@5
value: 0.04560185185185185
name: Cosine Mrr@5
- type: cosine_mrr@10
value: 0.05318011463844797
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.08175342147452774
name: Cosine Map@100
---
# SentenceTransformer based on nomic-ai/modernbert-embed-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) <!-- at revision d556a88e332558790b210f7bdbe87da2fa94a8d8 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
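The `Pooling` and `Normalize` modules above mean-pool the token embeddings over non-padding positions and L2-normalize the result, so the dot product of two encoded texts is already their cosine similarity. Below is a minimal NumPy sketch of that post-processing step, using made-up toy embeddings (the real model produces 768-dimensional vectors):

```python
import numpy as np

def mean_pool_and_normalize(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Mean-pool token embeddings over non-padding positions, then L2-normalize."""
    mask = attention_mask[..., None].astype(token_embeddings.dtype)  # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)                   # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                   # avoid division by zero
    pooled = summed / counts
    return pooled / np.linalg.norm(pooled, axis=1, keepdims=True)

# Toy token embeddings: batch of 2 texts, 3 tokens each, 4 dims (real model: 768 dims)
rng = np.random.default_rng(0)
tokens = rng.normal(size=(2, 3, 4))
mask = np.array([[1, 1, 1], [1, 1, 0]])  # second text ends with one padding token

emb = mean_pool_and_normalize(tokens, mask)
# After Normalize(), cosine similarity reduces to a plain dot product:
cos_sim = emb[0] @ emb[1]
print(emb.shape, np.linalg.norm(emb, axis=1))
```

Because the outputs are unit-length, a plain matrix product of embeddings gives the same scores as an explicit cosine-similarity computation for this model.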
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("xixixi0503/modernbert-biencoder")
# Run inference
sentences = [
'Quali sono i requisiti per ottenere una rendita vedovile per le donne sposate?',
'3.03 Prestazioni dell‘AVS\nRendite per superstiti\ndell’AVS\nStato al 1° gennaio 2025\x0c2In breve\nLe rendite per superstiti hanno lo scopo di evitare che, al decesso del \nconiuge, di uno o di entrambi i genitori, i superstiti vengano a trovarsi in \ngravi difficoltà finanziarie. Vi sono tre categorie di rendite per superstiti: \n• le rendite per vedove,\n• le rendite per vedovi,\n• le rendite per orfani.\nAffinché una persona abbia diritto a una rendita per superstiti, è necessario \nche alla persona deceduta si possa conteggiare almeno un anno di contri -\nbuzione completo.\nSi parla di anno di contribuzione completo quando:\n• la persona deceduta ha versato contributi complessivamente per un \nanno, oppure\n• la persona deceduta era assicurata e il coniuge ha versato il doppio del \ncontributo minimo almeno per un anno, oppure\n• alla persona deceduta si possono conteggiare accrediti per compiti \neducativi o assistenziali.\x0c3Rendite per vedove\n1 Quali sono i requisiti che devono soddisfare le donne \nsposate per avere diritto alla rendita vedovile?\nLe donne sposate il cui marito o la cui moglie è deceduto/a hanno diritto a \nuna rendita vedovile se all’insorgere della vedovanza:\n• hanno uno o più figli (di qualsiasi età). Sono considerati come figli an -\nche i figli del coniuge deceduto che vivono nell’economia domestica \ncomune e, in seguito alla sua morte, hanno diritto a una rendita per \norfani. Lo stesso vale per gli affiliati precedentemente affidati alle cure \ndei coniugi, a condizione che siano in seguito adottati dalla vedova. \nÈ considerata vedova con figli anche la moglie della madre, se al mo -\nmento della nascita del figlio era sposata con la madre e se il figlio è \nstato concepito secondo le disposizioni della legge del 18 dicembre \n1998 sulla medicina della procreazione, e quindi sussiste un rapporto \ndi filiazione (art. 255a cpv. 1 CC), o\n• hanno compiuto 45 anni e sono state sposate per almeno cinque anni. 
\nSe hanno contratto più matrimoni, si tiene conto della durata comples -\nsiva dei diversi matrimoni. Per le coppie di persone dello stesso sesso \nche hanno convertito l’unione domestica registrata in matrimonio la \ndurata di quest’ultima viene aggiunta agli anni di matrimonio.\n \x0c42 Quali sono i requisiti che devono soddisfare le donne \ndivorziate per avere diritto alla rendita vedovile?\nLe donne divorziate il cui ex marito o la cui ex moglie è deceduto/a hanno \ndiritto a una rendita vedovile:\n• se hanno figli e il matrimonio è durato almeno dieci anni,\n• se il divorzio è intervenuto dopo che esse hanno compiuto 45 anni e il \nmatrimonio è durato almeno dieci anni,\n• se il figlio più giovane ha compiuto 18 anni dopo che la madre divor -\nziata ne ha compiuti 45.\nLe donne divorziate che non soddisfano alcuna di queste condizioni hanno \ndiritto a una rendita vedovile finché il figlio più giovane compie 18 anni.\nÈ considerata vedova con figli anche l’ex moglie della madre, se al mo -\nmento della nascita del figlio era sposata con la madre e se il figlio è stato \nconcepito secondo le disposizioni della legge del 18 dicembre 1998 sulla \nmedicina della procreazione, e quindi sussiste un rapporto di filiazione (art. \n255a cpv. 1 CC).\nSe l’unione domestica registrata è stata convertita in matrimonio, la sua \ndurata viene aggiunta agli anni di matrimonio.\nRendite per vedovi\n3 Quali sono i requisiti per il diritto alla rendita vedovile \ncome uomo sposato o come partner registrato?\nGli uomini sposati la cui moglie o il cui marito è deceduta/o hanno diritto \na una rendita se all’insorgere della vedovanza hanno uno o più figli (di \nqualsiasi età). Sono considerati come figli anche i figli del coniuge deceduto \nche vivono nell’economia domestica comune e, in seguito alla sua morte, \nhanno diritto a una rendita per orfani. 
Lo stesso vale per gli affiliati prece -\ndentemente affidati alle cure dei coniugi, a condizione che siano in seguito \nadottati dal vedovo. \nSe un partner registrato decede, il partner superstite è equiparato, a pre -\nscindere dal sesso, a un vedovo. \x0c5Nella sentenza dell’11 ottobre 2022, la Grande Camera della Corte euro -\npea dei diritti dell’uomo (Corte EDU) ha stabilito che nel caso in esame vi \nfosse una disparità di trattamento contraria alla Convenzione europea dei \ndiritti dell’uomo (CEDU), in quanto la rendita vedovile del ricorrente era \nstata soppressa quando il figlio più giovane aveva raggiunto la maggiore \netà, il che non sarebbe avvenuto per una vedova nella stessa situazione. \nLa Svizzera deve conformarsi a questa sentenza, passata in giudicato l’11 \nottobre 2022, e porre fine alla violazione del diritto constatata dalla Corte \nEDU. Le basi legali devono quindi essere adeguate tenendo conto della \nprocedura legislativa. Quest’ultima può essere relativamente lunga e si \nsvolgerà quindi solo in un secondo momento. Fino ad allora si applicherà \nuna regolamentazione transitoria per i vedovi con figli, entrata in in vigore \nl’11 ottobre 2022, secondo la quale il diritto alla rendita per vedovi non si \nestinguerà più al compimento del 18° anno d’età da parte del figlio più \ngiovane e la rendita verrà corrisposta oltre tale età. \nLa sentenza della Corte EDU non si applica né ai vedovi né ai divorziati \nsenza figli. Sulla base di questa sentenza, i vedovi senza figli continuano \na non avere diritto alla rendita vedovile e, nel caso di uomini divorziati, il \ndiritto alla stessa si estingue in ogni caso quando il figlio più giovane rag -\ngiunge la maggiore età. La sentenza della Corte EDU non si applica nem -\nmeno ai casi in cui la rendita per vedovi sia stata soppressa con decisione \npassata in giudicato prima dell’11 ottobre 2022 in seguito al compimento \ndel 18° anno d’età da parte del figlio più giovane. 
\n4 Quali sono i requisiti che devono soddisfare gli uomini \ndivorziati per avere diritto alla rendita vedovile? \nGli uomini divorziati la cui ex moglie o il cui ex marito è deceduta/o hanno \ndiritto a una rendita vedovile finché hanno figli di età inferiore ai 18 anni. \x0c6Rendite per orfani\n5 Quali sono i requisiti per il diritto alla rendita per \norfani?\nIn caso di decesso di uno dei genitori, l’AVS versa ai figli una rendita per \norfani. \nSe al momento della nascita del figlio la madre è sposata con una donna e \nil figlio è stato concepito secondo le disposizioni della legge del 18 dicem -\nbre 1998 sulla medicina della procreazione, la moglie della madre è consi -\nderata l’altro genitore (art. 255a cpv. 1 CC). In questi casi, alla morte della \nmoglie della madre il figlio ha diritto a una rendita per orfani.\nIn caso di decesso di entrambi i genitori, i figli hanno diritto a due rendite \nper orfani: una per ciascun genitore. Il diritto alla rendita per orfani si estin -\ngue al 18° compleanno o al termine della formazione, ma al più tardi al \n25° compleanno. Per gli affiliati vigono disposizioni particolari. I figli che \ndurante la formazione conseguono un reddito lordo dell’attività lucrativa \nsuperiore a 30 240 franchi non hanno diritto a una rendita per orfani.\nInizio e fino del diritto\n6 Quando nasce il diritto a una rendita per superstiti?\nIl diritto alla rendita per superstiti nasce il primo giorno del mese successivo \na quello del decesso del coniuge (o dell’ex coniuge) o del genitore.\n7 Quando si estingue il diritto a una rendita per \nsuperstiti?\nIl diritto alla rendita per superstiti si estingue alla fine del mese in cui le con -\ndizioni non sono più adempiute. In caso di nuove nozze cessa il diritto alla \nrendita vedovile. 
Il diritto alle rendite per orfani continua invece a sussistere.\x0c7Concorso con altre prestazioni\n8 Quale delle rendite viene versata?\nSe una persona adempie contemporaneamente le condizioni poste per una \nrendita per superstiti e per una rendita di vecchiaia o d’invalidità, si versa \nsolo la rendita più elevata.\nRiscossione delle rendite per superstiti\n9 Dove far valere il proprio diritto a una rendita per \nsuperstiti?\nChi intende far valere il proprio diritto alla rendita per superstiti deve rivol -\ngersi alla cassa di compensazione che, per ultima, ha incassato i contributi \ndella persona deceduta. Il modulo 318.371 – Richiesta di una rendita per \nsuperstiti può essere ottenuto presso le casse di compensazione e le loro \nagenzie o sul sito internet www.avs-ai.ch . In seguito la domanda deve es -\nsere inoltrata presso la cassa di compensazione competente.\nGli assicurati che hanno compiuto periodi assicurativi in Svizzera e in uno \no più Stati membri dell’UE o dell’AELS possono semplicemente inoltrare \nla richiesta nel Paese di domicilio: con questa richiesta prenderanno avvio \nanche le procedure necessarie in tutti gli altri Paesi interessati.\nSe la persona deceduta non ha versato contributi AVS, il diritto a rendite \nper superstiti dev’essere fatto valere presso la cassa di compensazione can -\ntonale oppure presso la sua agenzia.\nSe risiede all’estero, voglia consultare la rubrica «Richiedere una rendita per \nsuperstiti» sul sito Internet della Cassa svizzera di compensazione (CSC): \nwww.cdc.admin.ch\x0c8Calcolo delle rendite per superstiti\n10 Come si calcolano le rendite per superstiti?\nGli elementi di calcolo delle rendite per superstiti sono:\n• gli anni di contribuzione,\n• i redditi da attività lucrativa e\n• gli accrediti per compiti educativi o assistenziali della persona deceduta.\nPer il calcolo degli anni di contribuzione ai fini della rendita per vedovi e \nla rendita per orfani in seguito al decesso della (ex) moglie o 
della madre \nvale quanto segue: gli anni di matrimonio trascorsi prima del 31 dicembre \n1996 (durante i quali la moglie era assicurata, ma non tenuta a versare i \ncontributi) sono conteggiati come anni di contribuzione.\n11 Quando si percepisce una rendita completa?\nI superstiti percepiscono una rendita completa (scala delle rendite 44) se \nla persona deceduta ha versato contributi per l’intera durata contributiva, \nossia dal 1° gennaio dell’anno successivo al compimento dei 20 anni fino \nal decesso.\n12 Quando si percepisce una rendita parziale?\nIn caso di durata di contribuzione incompleta, vale a dire se la persona \ndeceduta non conta lo stesso numero di anni interi di contribuzione della \nsua classe di età, è versata una rendita parziale (scala delle rendite 1-43). \nLa rendita parziale è calcolata secondo il rapporto esistente tra gli anni di \ncontribuzione effettivi della persona deceduta e la durata di contribuzione \ncompleta.\n13 A chi vengono conteggiati i cosiddetti anni giovanili?\nGli anni di gioventù sono i periodi di contribuzione compiuti dai 18 ai 20 \nanni. Se la persona deceduta ha compiuto periodi di contribuzione fino \nai 20 anni, questi le vengono conteggiati come anni giovanili per colmare \neventuali lacune contributive successive.\x0c914 I periodi di contribuzione compiuti dopo l’età di \nriferimento vengono conteggiati?\nSe la persona deceduta ha continuato a lavorare dopo l’età di riferimento, \na determinate condizioni i periodi di contribuzione in questione possono \nessere conteggiati per colmare eventuali lacune contributive o aumentare \nla rendita grazie al computo dei redditi supplementari dell’attività lucrativa. \nDopo il raggiungimento dell’età di riferimento, un nuovo calcolo della ren -\ndita può essere richiesto soltanto una volta. 
Se la persona deceduta non \nha chiesto un nuovo calcolo della rendita di vecchiaia, i superstiti possono \npresentare una domanda in tal senso per la rendita per superstiti che la \nsostituirà. \nPer ulteriori informazioni si veda l’opuscolo informativo 3.08 – Nuovo cal -\ncolo della rendita di vecchiaia dopo l’età di riferimento .\n15 Com’è composto il reddito annuo medio?\nIl reddito annuo medio è composto:\n• dalla media dei redditi da attività lucrativa,\n• dalla media degli accrediti per compiti educativi e\n• dalla media degli accrediti per compiti assistenziali.\nMedia dei redditi da attività lucrativa\n16 Come si calcola la media dei redditi da attività lucrativa?\nLe rendite per superstiti sono calcolate sulla base dei redditi da attività \nlucrativa conseguiti dalla persona deceduta.\nPer calcolare la media dei redditi da attività lucrativa vengono sommati tutti \ni redditi realizzati fino al 31 dicembre dell’anno precedente l’insorgenza \ndell’evento assicurato. I redditi conseguiti negli anni giovanili sono presi \nin considerazione solo se servono a colmare lacune contributive sorte più \ntardi.\nI redditi da attività lucrativa sono registrati sui cosiddetti conti individuali \n(CI) di ogni persona. \x0c1017 La somma dei redditi da attività lucrativa viene \nadeguata all’evoluzione dei prezzi e dei salari? Come?\nI redditi possono essere stati conseguiti in anni in cui il livello dei salari era \npiù basso. La somma dei redditi è perciò rivalutata in base all’evoluzione \nmedia dei prezzi e dei salari. La somma rivalutata è quindi divisa per il \nnumero degli anni e dei mesi computabili. 
Il risultato è la media dei redditi \nda attività lucrativa.\n18 Che cos’è il cosiddetto supplemento di carriera?\nSe la persona deceduta non aveva ancora compiuto 45 anni al momento \ndel decesso, la media del reddito da attività lucrativa è aumentata di un \nsupplemento percentuale (supplemento di carriera) in funzione dell’età.\nIn caso di decesso Percentuale\ndopo il compimento \ndei… anniprima del compimento \ndei… anni\n23 100\n23 24 90\n24 25 80\n25 26 70\n26 27 60\n27 28 50\n28 30 40\n30 32 30\n32 35 20\n35 39 10\n39 45 5\x0c11Media degli accrediti per compiti educativi e \nassistenziali\n19 Che cosa sono gli accrediti per compiti educativi?\nNel calcolo della rendita per superstiti, si può attribuire a una persona \ndeceduta un accredito per compiti educativi per ogni anno in cui si è occu -\npata di figli di età inferiore ai 16 anni. Questo accredito ammonta al triplo \ndella rendita minima annua. Per le persone coniugate, l’accredito è diviso a \nmetà durante gli anni civili di matrimonio. Tuttavia, la ripartizione interessa \nunicamente gli accrediti acquisiti durante il periodo tra il 1° gennaio che \nsegue il compimento dei 20 anni e il 31 dicembre che precede il raggiun -\ngimento dell’età di riferimento da parte del coniuge più anziano. La media \ndegli accrediti per compiti educativi si ottiene dividendo la somma degli \nstessi per il periodo di contribuzione complessivo. \nNel caso dei genitori divorziati o non coniugati che esercitano l’autorità pa -\nrentale congiunta, gli accrediti per compiti educativi vengono conteggiati, \ninteramente a uno dei genitori o per metà a ciascuno dei due, in applica -\nzione della decisione del tribunale o dell’autorità di protezione dei minori e \ndegli adulti (APMA) o sulla base della convenzione parentale. 
\nAl riguardo, si rimanda alle indicazioni dettagliate dell’opuscolo informa -\ntivo 1.07 – Accrediti per compiti educativi .\n20 Che cosa sono gli accrediti per compiti assistenziali?\nAlle persone decedute possono essere conteggiati accrediti per compiti assi -\nstenziali per gli anni in cui esse hanno assistito parenti al beneficio di assegni \nper grandi invalidi che potevano essere facilmente raggiungibili. Sono parifi -\ncati ai parenti i partner che convivono con gli assicurati nella medesima eco -\nnomia domestica ininterrottamente da almeno cinque anni. Per gli anni in cui \nsi possono conteggiare accrediti per compiti educativi non vi è diritto ad ac -\ncrediti per compiti assistenziali. L’importo dell’accredito per compiti assisten -\nziali ammonta al triplo della rendita minima annua. Per le persone coniugate \nl’accredito è diviso a metà durante gli anni civili di matrimonio. Tuttavia, la \nripartizione interessa unicamente gli accrediti acquisiti durante il periodo \ntra il 1° gennaio che segue il compimento dei 20 anni e il 31 dicembre che \nprecede il raggiungimento dell’età di riferimento da parte del coniuge più \nanziano. Si ottiene la media degli accrediti per compiti assistenziali divi -\ndendo la somma degli stessi per il periodo di contribuzione complessivo. \x0c12La richiesta d’iscrizione di accrediti per compiti assistenziali deve essere \npresentata ogni anno per l’anno precedente alla cassa cantonale di com -\npensazione del luogo di domicilio della persona assistita. 
A tale scopo va \nutilizzato il modulo 318.270 – Richiesta d’iscrizione di accrediti per compiti \nassistenziali.\nAl riguardo, si rimanda alle indicazioni dettagliate dell’opuscolo informa -\ntivo 1.03 – Accrediti per compiti assistenziali .\nImporto delle rendite\n21 Quali sono gli importi attuali delle rendite?\nIn caso di durata completa di contribuzione, le rendite complete ordinarie \nammontano, a seconda del reddito medio, a:\nminimo massimo\nCHF/mese CHF/mese\nRendita per vedove e vedovi 1 008.– 2 016.–\nRendita per orfani 504.– 1 008.–\nSe, per lo stesso figlio, sono concesse due rendite per orfani oppure una \nrendita per orfani e una rendita per figli, la somma delle due rendite non \ndeve superare l’importo di 1 512 franchi, ossia il 60 % dell’importo mas-\nsimo della rendita di vecchiaia.\nPrestazioni complementari\n22 Chi ha diritto a prestazioni complementari?\nLe vedove, i vedovi e gli orfani di modeste condizioni economiche hanno \ndiritto, a certe condizioni, a prestazioni complementari. Al riguardo, si ri -\nmanda alle indicazioni dettagliate degli opuscoli informativi 5.01 – Presta -\nzioni complementari all’AVS e all’AI e 5.02 – Diritto a prestazioni comple -\nmentari all’AVS e all’AI .\nSe risiede all’estero, non ha diritto alle prestazioni complementari all’AVS \ne all’AI.\x0c13Esempio di calcolo\n23 Decesso del marito o del padre\nUn assicurato nato nel giugno 1975 muore nel marzo 2025. Lascia la \nmoglie e due figli, nati nel 2007 e nel 2008. Sono quindi computabili \ncompiti educativi per 17 anni. Dal 1° aprile 2025 sono versate una rendita \nvedovile e due rendite per orfani. 
Dal 1996 fino alla sua morte, il defunto \nha pagato ininterrottamente i contributi AVS; ai suoi superstiti sono per -\ntanto concesse rendite complete ( scala delle rendite 44 ).\nLa media dei redditi da attività lucrativa è calcolata come segue, \nsulla base dei conti individuali:\nSomma dei redditi conseguiti durante 29 anni \ndi contribuzione, dal 1996 al 2024 CHF 1 600 000.–\nLa somma rivalutata divisa per la durata di \ncontribuzione determinante (29 anni) dà una \nmedia dei redditi dell’attività lucrativa di CHF 55 172.–\nLa media degli accrediti per compiti educativi viene calcolata \ncome segue:\nNumero di anni x il triplo della rendita \nminima annua ÷ durata di contribuzione ÷ 2 \n17 x 45 360 franchi ÷ 29 ÷ 2 CHF 13 295.–\nCalcolo del reddito annuo medio e delle rendite:\nMedia dei redditi dell’attività lucrativa CHF 55 172.–\nMedia degli accrediti per compiti educativi CHF 13 295.–\nReddito annuo medio (arrotondato per eccesso \nal valore successivo delle tabelle allegata, Scala 44: \nrendite complete mensili ) di CHF 69 552.–\nCome risulta dalla tabella allegata, \ngli importi delle rendite sono i seguenti: \nrendita per vedove CHF 1 790.–\ndue rendite per orfani, ciascuna CHF 895.–\nAllegato\n• Tabella per le rendite complete (scala delle rendite 44)\n• Tabella dei fattori di rivalutazione\x0c14Rendite AVS/AI dal 1° gennaio 2025\nScala 44: rendite complete mensili Importi in franchi\nBase di calcolo Rendita di \nvecchiaia e \nd’invalidità Rendita di \nvecchiaia e \nd’invalidità \nper vedove/\nvedoviRendite per i superstiti\nReddito annuo \nmedio determi -\nnanteVedove/ \nvedoviRendita \ncomple -\ntivaRendita per \norfani e per \nfigliRendita per \norfani \n60 %*\n1/1 1/1 1/1 1/1\n fino a 15 120 1 260 1 512 1 008 378 504 756\n16 632 1 293 1 551 1 034 388 517 776\n18 144 1 326 1 591 1 060 398 530 795\n19 656 1 358 1 630 1 087 407 543 815\n21 168 1 391 1 669 1 113 417 556 835\n22 680 1 424 1 709 1 139 427 570 854\n24 192 1 457 1 748 1 165 437 583 874\n25 
704 1 489 1 787 1 191 447 596 894\n27 216 1 522 1 826 1 218 457 609 913\n28 728 1 555 1 866 1 244 466 622 933\n30 240 1 588 1 905 1 270 476 635 953\n31 752 1 620 1 944 1 296 486 648 972\n33 264 1 653 1 984 1 322 496 661 992\n34 776 1 686 2 023 1 349 506 674 1 011\n36 288 1 719 2 062 1 375 516 687 1 031\n37 800 1 751 2 102 1 401 525 701 1 051\n39 312 1 784 2 141 1 427 535 714 1 070\n40 824 1 817 2 180 1 454 545 727 1 090\n42 336 1 850 2 220 1 480 555 740 1 110\n43 848 1 882 2 259 1 506 565 753 1 129\n45 360 1 915 2 298 1 532 575 766 1 149\n46 872 1 935 2 322 1 548 581 774 1 161\n48 384 1 956 2 347 1 564 587 782 1 173\n49 896 1 976 2 371 1 580 593 790 1 185\n51 408 1 996 2 395 1 597 599 798 1 197\n52 920 2 016 2 419 1 613 605 806 1 210\n54 432 2 036 2 443 1 629 611 814 1 222\n55 944 2 056 2 468 1 645 617 823 1 234\n57 456 2 076 2 492 1 661 623 831 1 246\n58 968 2 097 2 516 1 677 629 839 1 258\n60 480 2 117 2 520 1 693 635 847 1 270\n61 992 2 137 2 520 1 710 641 855 1 282\n63 504 2 157 2 520 1 726 647 863 1 294\n65 016 2 177 2 520 1 742 653 871 1 306\n66 528 2 197 2 520 1 758 659 879 1 318\n68 040 2 218 2 520 1 774 665 887 1 331\n69 552 2 238 2 520 1 790 671 895 1 343\n71 064 2 258 2 520 1 806 677 903 1 355\n72 576 2 278 2 520 1 822 683 911 1 367\n74 088 2 298 2 520 1 839 689 919 1 379\n75 600 2 318 2 520 1 855 696 927 1 391\n77 112 2 339 2 520 1 871 702 935 1 403\n78 624 2 359 2 520 1 887 708 943 1 415\n80 136 2 379 25 20 1 903 714 952 1 427\n81 648 2 399 2 520 1 919 720 960 1 439\n83 160 2 419 2 520 1 935 726 968 1 452\n84 672 2 439 2 520 1 951 732 976 1 464\n86 184 2 460 2 520 1 968 738 984 1 476\n87 696 2 480 2 520 1 984 744 992 1 488\n89 208 2 500 2 520 2 000 750 1 000 1 500\n 90 720 e più 2 520 2 520 2 016 756 1 008 1 512\n* Gli importi valgono anche per le rendite doppie per orfani e per le rendite intere doppie per figli previste dal diritto \nprevigente.\x0c15Fattori forfetari di rivalutazione, calcolati in funzione dell’en -\ntrata nell’assicurazione: 
insorgenza del caso d’assicurazione \nnel 2025\nPrima registra- \nzione nel CI*Fattore di \nrivalutazionePrima registra- \nzione nel CI*Fattore di \nrivalutazione\n1976 1,110 2001 1,000\n1977 1,098 2002 1,000\n1978 1,086 2003 1,000\n1979 1,075 2004 1,000\n1980 1, 063 2005 1,000\n1981 1,052 2006 1,000\n1982 1,042 2007 1,000\n1983 1,032 2008 1,000\n1984 1,022 2009 1,000\n1985 1,013 2010 1,000\n1986 1,004 2011 1,000\n1987 1,000 2012 1,000\n1988 1,000 2013 1,000\n1989 1,000 2014 1,000\n1990 1,000 2015 1,000\n1991 1,000 2016 1,000\n1992 1,000 2017 1,000\n1993 1,000 2018 1,000\n1994 1,000 2019 1,000\n1995 1,000 2020 1,000\n1996 1,000 2021 1,000\n1997 1,000 2022 1,000\n1998 1,000 2023 1,000\n1999 1,000 2024 1,000\n2000 1,000\n* La prima registrazione determinante nel CI, cha va presa in considerazione per \nil calcolo della rendita, può risalire al più presto all’anno civile del compimento dei \n21 anni.\x0c16Chiarimenti e altre \ninformazioni\nQuesto opuscolo informativo presenta solo una panoramica riassun -\ntiva. Per la valutazione dei singoli casi fanno stato esclusivamente le \ndisposizioni legali in vigore. Per ulteriori informazioni ci si può rivolgere \nalle casse di compensazione o alle loro agenzie. L’elenco delle casse di \ncompensazione è pubblicato all’indirizzo Internet www.avs-ai.ch .\nI termini relativi allo stato civile hanno anche il significato seguente:\n• matrimonio: unione domestica registrata,\n• divorzio: scioglimento giudiziale dell’unione domestica registrata,\n• decesso del coniuge: decesso del partner registrato.\nPubblicato dal Centro d’informazione AVS/AI in collaborazione con \nl’Ufficio federale delle assicurazioni sociali.\nEdizione novembre 2024. La riproduzione, anche solo parziale, è \nautorizzata soltanto con il consenso scritto del Centro d’informazione \nAVS/AI.\nQuesto opuscolo informativo può essere richiesto alle casse di compen -\nsazione, alle loro agenzie e agli uffici AI. Numero di ordinazione 3.03/i. 
\nÈ disponibile anche su www.avs-ai.ch .\n Ulteriori informazioni, pubblicazioni e video esplicativi.\n3.03-25/01-I',
'4.12 Leistungen der IV\nEingliederungsorientierte \nBeratung, Früherfassung \nStand am 1. Januar 2024und Frühintervention\x0c2Auf einen Blick\nDie eingliederungsorientierte Beratung, die Früherfassung und die Frühin -\ntervention sind präventive Mittel der Invalidenversicherung (IV) und drei \nverschiedene Phasen im IV-Verfahren, die es zu unterscheiden gilt.\nMit der eingliederungsorientierten Beratung bietet die IV-Stelle unabhän -\ngig von einem konkreten Fall Beratungsgespräche und allgemeine Informa -\ntionen zur IV an. \nIm Rahmen der Früherfassung sollen arbeitsunfähige, von Arbeitsunfähig -\nkeit oder von Invalidität bedrohte Personen so rasch wie möglich mit Fach -\npersonen der IV in Kontakt treten. Sobald der Kontakt hergestellt ist, wird \nmöglichst schnell darüber entschieden, ob eine IV-Anmeldung notwendig \nist. \nSobald eine IV-Anmeldung eingereicht wird, prüft die zuständige IV-Stelle \ngemeinsam mit der versicherten Person und den involvierten Partnern, ob \ngeeignete Frühinterventionsmassnahmen den Erhalt des Arbeitsplatzes \noder eine rasche Reintegration ins Arbeitsleben ermöglichen können.\nDieses Merkblatt informiert Versicherte, Eingliederungsakteure sowie Mel -\ndeberechtigte über die eingliederungsorientierte Beratung, die Früherfas -\nsung und die Frühintervention.\n \x0c3Eingliederungsorientierte Beratung\n1 Was ist eine eingliederungsorientierte Beratung?\nDie eingliederungsorientierte Beratung umfasst niederschwellige und fall- \nunabhängige Beratungsgespräche durch die IV-Stelle. 
Darunter fallen bei -\nspielsweise allgemeine Informationen über den Auftrag und die Leistungen \nder IV, über den Umgang mit Erkrankungen am Arbeitsplatz, die Meldung \nzur Früherfassung oder die Anmeldung für IV-Leistungen.\n2 An wen richtet sich die eingliederungsorientierte \nBeratung?\nDie eingliederungsorientierte Beratung richtet sich an versicherte Perso -\nnen, Arbeitgebende, behandelnde Ärzte sowie betroffene Akteure des \nSchul- und Bildungswesens auf deren Ersuchen hin.\n3 Besteht ein Rechtsanspruch auf eingliederungs- \norientierte Beratung?\nEs besteht kein Rechtsanspruch auf eingliederungsorientierte Beratung.\x0c4Früherfassungsphase \nFrüherfassung\n4 Was ist eine Früherfassung?\nDie Früherfassung zielt darauf ab, dass die IV-Stelle so früh wie möglich mit \nPersonen in Kontakt tritt, die aus gesundheitlichen Gründen arbeitsunfähig \noder von Arbeitsunfähigkeit bedroht sind und bei denen die Gefahr einer \nChronifizierung der gesundheitlichen Beschwerden besteht. Kommt die \nIV-Stelle zum Schluss, dass ohne geeignete Massnahmen eine Invalidität \ndroht, fordert sie die betroffene Person auf, sich bei der IV anzumelden. \nDie Früherfassung ermöglicht der IV ein rasches Eingreifen und präventives \nVorgehen zugunsten der beruflichen Eingliederung. \n5 Können Jugendliche zur Früherfassung gemeldet \nwerden?\nJa. 
Jugendliche und junge erwachsene Personen zwischen 13 und 25 Jah -\nren, können sich melden oder gemeldet werden, wenn sie:\n• von Invalidität bedroht sind, \n• noch keine Erwerbstätigkeit ausgeübt haben und \n• sich in einem kantonalen Brückenangebot befinden oder von einer \nkantonalen Koordinationsstelle für Jugendliche in ihrer beruflichen Ein -\ngliederung unterstützt werden.\nAuch Jugendliche, die bereits erwerbstätig waren und erwachsene Perso -\nnen, die arbeitsunfähig oder von Arbeitsunfähigkeit bedroht sind, können \nsich melden oder gemeldet werden.\nMeldung zur Früherfassung\n6 Wer kann eine Meldung einreichen?\nFolgende Personen und Instanzen können eine Meldung einreichen:\n• die versicherte Person sowie ihre gesetzliche Vertretung\n• die mit der versicherten Person im gemeinsamen Haushalt lebenden \nFamilienangehörigen \n• die Arbeitgebenden\n• die behandelnden Ärzte und Chiropraktiker \n• der Krankentaggeldversicherer\x0c5• die privaten Versicherungsunternehmen, die eine Krankentaggeld- \noder Rentenversicherung anbieten\n• der Unfallversicherer\n• die Einrichtungen der beruflichen Vorsorge\n• die Arbeitslosenversicherung\n• die Sozialhilfeorgane\n• die Militärversicherung\n• der Krankenversicherer\n• die kantonalen Instanzen und Durchführungsstellen, die für die Unter -\nstützung und die Förderung der beruflichen Eingliederung von Jugend -\nlichen zuständig sind\n7 Wie erfolgt die Meldung?\nDie Meldung ist schriftlich bei der IV-Stelle des Wohnsitzkantons der ver -\nsicherten Person einzureichen. Das Formular kann bei den IV-Stellen, den \nAusgleichskassen und deren Zweigstellen sowie unter www.ahv-iv.ch be-\nzogen werden.\n8 Wird die versicherte Person vorgängig über die \nMeldung informiert?\nJa. Personen und Instanzen, die eine versicherte Person zur Früherfassung \nbei der IV-Stelle melden, müssen diese vorgängig darüber informieren.\n9 Ist die Meldung zur Früherfassung eine Anmeldung für \nIV-Leistungen?\nNein. 
Die Meldung zur Früherfassung gilt nicht als Anmeldung für Leistun -\ngen der IV. In der Früherfassungsphase werden keine Leistungen der IV \nzugesprochen.\x0c6Früherfassungsgespräch\n10 Das Meldeformular wurde der IV-Stelle eingereicht, \nwie geht es jetzt weiter?\nDie IV-Stelle kann die gemeldete Person zu einem Gespräch einladen. In \ndiesem wird\n• sie über den Zweck der Früherfassung informiert,\n• eine Analyse ihrer medizinischen, beruflichen und sozialen Situation \nvorgenommen,\n• sie darüber aufgeklärt, welche Informationen die IV-Stelle bei wem ein -\nholt,\n• geprüft, ob eine IV-Anmeldung angezeigt ist.\n11 Wer kann an diesem Gespräch teilnehmen?\nMit dem Einverständnis der versicherten Person können Dritte am Gespräch \nteilnehmen, zum Beispiel die Person/Institution, welche den Fall gemeldet \nhat und/oder Arbeitgebende. Es steht der versicherten Person ebenfalls \noffen, sich von einer Vertrauensperson begleiten zu lassen. Hält es die IV-\nStelle für angezeigt, kann auch ein Arzt oder eine Ärztin des regionalen \närztlichen Dienstes (RAD) hinzugezogen werden.\n12 Wann erfolgt kein Gespräch?\nGeht aus der Meldung bereits eindeutig hervor, dass eine sofortige \nIV-Anmeldung angezeigt oder die IV nicht zuständig ist, wird auf ein \nGespräch verzichtet.\n13 Wo kann die IV-Stelle weitere Informationen einholen?\nGenügen die Informationen aus dem Gespräch für den Entscheid nicht, \nkann die IV-Stelle mit der Vollmacht der versicherten Person weitere Infor -\nmationen einholen, unter anderem bei medizinischem Fachpersonal, wei -\nteren Versicherungen, Arbeitgebenden oder der Sozialhilfe.\nEnde der Früherfassungsphase\n14 Wann endet die Früherfassung?\nMit dem Eingang der IV-Anmeldung oder der Mitteilung an die versicherte \nPerson, es sei keine solche nötig, endet die Früherfassungsphase. \x0c7Anmeldung für IV-Leistungen\n15 Wer kann eine IV-Anmeldung einreichen?\nGrundsätzlich muss die versicherte Person die IV-Anmeldung selbst einrei -\nchen. 
Auch ihr gesetzlicher Vertreter bzw. die Behörden oder Dritte, wel -\nche die Person regelmässig unterstützen bzw. dauernd betreuen, können \neinen Anspruch auf Leistungen der IV geltend machen.\n16 Wie erfolgt die Anmeldung?\nDie Anmeldung muss bei der IV-Stelle des Wohnsitzkantons der versicher -\nten Person eingereicht werden. Das entsprechende Antragsformular kann \nbei den IV-Stellen, den Ausgleichskassen und deren Zweigstellen sowie un -\nter www.ahv-iv.ch bezogen werden. \nFrühinterventionsphase \nFrühintervention\n17 Was ist das Ziel der Frühintervention?\nZiel der Frühintervention ist es, durch rasches Handeln die Arbeits- und \nErwerbsfähigkeit der betroffenen Person möglichst zu erhalten oder zu \nverbessern. Jugendliche, die bereits erwerbstätig waren, Arbeitsunfähige \noder von einer länger dauernden Arbeitsunfähigkeit bedrohte Erwachsene, \nwerden dabei unterstützt, ihren Arbeitsplatz im bisherigen Betrieb beizu -\nbehalten, bzw. betriebsintern oder in einem anderen Betrieb einen neuen \nArbeitsplatz zu übernehmen. \nMit den Frühinterventionsmassnahmen kann die IV auch Jugendliche und \njunge Erwachsene, die noch nicht erwerbstätig waren und von einer Inva -\nlidität bedroht sind, frühzeitig auf dem Weg in eine berufliche Ausbildung \noder in eine erste Anstellung im ersten Arbeitsmarkt unterstützen. \nDie Frühinterventionsphase beginnt mit der Einreichung der IV-Anmeldung \nund erstreckt sich maximal über eine Dauer von zwölf Monaten.\x0c8Bestandsaufnahme\n18 Was beinhaltet die Bestandsaufnahme?\nNach Eingang der IV-Anmeldung nimmt die IV-Stelle eine Bestandsauf -\nnahme vor. Diese hat zum Ziel, ein umfassendes Bild von der Gesamt- \nsituation der versicherten Person zu erhalten, das nebst den gesundheit -\nlichen und beruflichen Aspekten, den Ressourcen und Einschränkungen \nauch die familiäre, soziale und finanzielle Situation mitberücksichtigt. 
Ge -\nstützt auf diese Informationen entscheidet die IV-Stelle, ob Frühinterventi -\nonsmassnahmen, Integrationsmassnahmen oder Massnahmen beruflicher \nArt angezeigt sind. \nAuf die Bestandsaufnahme kann verzichtet werden, wenn aus der IV-An -\nmeldung hervorgeht, dass die Invalidenversicherung nicht zuständig, oder \ndie Eingliederung nicht möglich ist oder wenn nicht die Frage der Einglie -\nderung oder der Rente im Zentrum steht, sondern ein Hilfsmittel oder eine \nHilflosenentschädigung.\n19 Wer kann an der Bestandsaufnahme teilnehmen?\nDie versicherte Person kann sich während der Bestandsaufnahme, die in \nForm eines oder mehrerer Gespräche stattfindet, von weiteren Personen \nbegleiten lassen (z.B. Arbeitgebenden, behandelnde Ärztin oder behan -\ndelnder Arzt). Die Gespräche werden von der Eingliederungsfachperson \ngeführt. Hält es die IV-Stelle für angezeigt, kann auch ein Arzt oder eine \nÄrztin des regionalen ärztlichen Dienstes (RAD) hinzugezogen werden. \nEingliederungsplan \n20 Was beinhaltet der Eingliederungsplan?\nGestützt auf die Bestandsaufnahme wird ein auf die versicherte Person \nzugeschnittener Eingliederungsplan ausgearbeitet. Der Eingliederungsplan\n• enthält die zu erreichenden Ziele und die vorgesehenen Massnahmen,\n• regelt die Kooperation zwischen den beteiligten Parteien,\n• definiert die Verantwortlichkeiten und Fristen.\nAuf Basis des Eingliederungsplanes kann eine Zielvereinbarung erstellt wer -\nden. \x0c921 Was sind Massnahmen der Frühintervention?\nMassnahmen der Frühintervention sind: \nWährend der obligatorischen Schulzeit ab dem vollendeten 13. 
Altersjahr: \n• Berufsberatung\n• Arbeitsvermittlung (Unterstützung bei der Suche nach einem Ausbil -\ndungsplatz)\nFür Jugendliche nach der obligatorischen Schulzeit und für Erwachsene: \n• Anpassungen des Arbeitsplatzes \n• Ausbildungskurse \n• Arbeitsvermittlung (Unterstützung beim Arbeitsplatzerhalt und bei der \nStellensuche)\n• Berufsberatung\n• Sozialberufliche Rehabilitation \n• Beschäftigungsmassnahmen \n• Beratung und Begleitung\n22 Besteht ein Rechtsanspruch auf \nFrühinterventionsmassnahmen?\nNein. Es besteht kein Rechtsanspruch auf Frühinterventionsmassnahmen. \n23 Besteht Anspruch auf ein IV-Taggeld?\nNein. Während der Durchführung dieser Massnahmen werden keine \nTaggelder der IV ausbezahlt.\nEnde der Frühinterventionsphase\n24 Wann endet die Frühintervention?\nDer Frühinterventionsprozess wird abgeschlossen durch einen Entscheid \nin Form \n• einer Mitteilung, dass der versicherten Person Integrationsmassnah -\nmen oder Massnahmen beruflicher Art gewährt werden,\n• der Mitteilung, die Rentenfrage werde geprüft, oder\n• einer ablehnenden Leistungsverfügung.\x0c10Auskünfte und weitere \nInformationen\nDieses Merkblatt vermittelt nur eine Übersicht. Für die Beurteilung von \nEinzelfällen sind ausschliesslich die gesetzlichen Bestimmungen mass -\ngebend. Die IV-Stellen, die Ausgleichskassen und ihre Zweigstellen \ngeben gerne Auskunft. Ein Verzeichnis aller Ansprechpartner finden \nSie unter www.ahv-iv.ch .\nHerausgegeben von der Informationsstelle AHV/IV in Zusammenarbeit \nmit dem Bundesamt für Sozialversicherungen.\nNachdruck November 2024. Auch auszugsweiser Abdruck ist nur mit \nschriftlicher Einwilligung der Informationsstelle AHV/IV erlaubt. \nDieses Merkblatt kann bei den Ausgleichskassen und deren Zweig- \nstellen sowie den IV-Stellen bezogen werden. Bestellnummer 4.12/d. \nEs ist ebenfalls unter www.ahv-iv.ch verfügbar.\n Weitere Informationen, Publikationen und Erklärvideos.\n4.12-24/01-D',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
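The `model.similarity` call above returns pairwise cosine similarity between the embeddings. For illustration, the same score can be reproduced in plain Python; this is a minimal sketch using hypothetical toy 3-dimensional vectors in place of the model's 768-dimensional output:

```python
from math import sqrt

def cosine(a, b):
    # Dot product of the two vectors divided by the product of their norms.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Toy "embeddings" standing in for real model output (hypothetical values).
emb = [[1.0, 0.0, 0.0], [0.8, 0.6, 0.0], [0.0, 0.0, 1.0]]
similarities = [[cosine(a, b) for b in emb] for a in emb]
# The diagonal is 1.0: every vector is maximally similar to itself.
```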
## Evaluation
### Metrics
#### Triplet
* Dataset: `tri-acc`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| **cosine_accuracy** | **0.1111** |
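Triplet accuracy is the fraction of (anchor, positive, negative) triplets for which the anchor is more similar to the positive than to the negative. A minimal sketch of that definition, using hypothetical pre-computed cosine scores (not this model's actual output) chosen so that one of nine triplets is ranked correctly, mirroring the 0.1111 reported above:

```python
def triplet_accuracy(pos_scores, neg_scores):
    # Count triplets where similarity(anchor, positive) exceeds
    # similarity(anchor, negative), then normalize by the triplet count.
    correct = sum(1 for p, n in zip(pos_scores, neg_scores) if p > n)
    return correct / len(pos_scores)

# Hypothetical cosine scores for nine triplets; only the last is correct.
pos = [0.2, 0.3, 0.1, 0.4, 0.2, 0.3, 0.1, 0.2, 0.9]
neg = [0.5, 0.6, 0.4, 0.7, 0.5, 0.6, 0.4, 0.5, 0.1]
acc = triplet_accuracy(pos, neg)  # 1/9 = 0.1111...
```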
#### Information Retrieval
* Dataset: `ir-metrics`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.0139 |
| cosine_precision@1 | 0.0139 |
| cosine_precision@5 | 0.0194 |
| cosine_precision@10 | 0.0153 |
| cosine_recall@1 | 0.0139 |
| cosine_recall@5 | 0.0972 |
| cosine_recall@10 | 0.1528 |
| cosine_ndcg@1 | 0.0139 |
| cosine_ndcg@5 | 0.0585 |
| **cosine_ndcg@10** | **0.0766** |
| cosine_mrr@1 | 0.0139 |
| cosine_mrr@5 | 0.0456 |
| cosine_mrr@10 | 0.0532 |
| cosine_map@100 | 0.0818 |
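The retrieval metrics above follow the standard information-retrieval definitions. As a sketch, MRR@k and recall@k can be computed from ranked result lists like this (the document IDs and rankings below are hypothetical, not the actual evaluation data):

```python
def mrr_at_k(ranked_lists, relevant, k):
    # Mean reciprocal rank: 1/rank of the first relevant hit within the
    # top k (0 if none appears there), averaged over all queries.
    total = 0.0
    for results, rel in zip(ranked_lists, relevant):
        for rank, doc in enumerate(results[:k], start=1):
            if doc in rel:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

def recall_at_k(ranked_lists, relevant, k):
    # Fraction of relevant documents retrieved in the top k, averaged over queries.
    return sum(len(set(results[:k]) & rel) / len(rel)
               for results, rel in zip(ranked_lists, relevant)) / len(ranked_lists)

# Two hypothetical queries with one relevant document each.
ranked = [["d3", "d1", "d7"], ["d5", "d2", "d9"]]
rel = [{"d1"}, {"d9"}]
```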
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 645 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 645 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 23.43 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 230 tokens</li><li>mean: 5086.11 tokens</li><li>max: 8192 tokens</li></ul> | <ul><li>min: 230 tokens</li><li>mean: 4914.49 tokens</li><li>max: 8192 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:---|:---|:---|
| <code>Quand puis-je faire recours ?</code> | <code>4.06 Prestations de l’AI<br>La procédure dans l’AI<br>Etat au 1er janvier 2024 2En bref<br>Les personnes qui sollicitent l’intervention de l’AI dans le cadre de la <br>détection précoce peuvent adresser une communication à l’office AI du <br>canton de domicile de la personne assurée. <br>La personne assurée doit faire valoir son droit à des prestations de l’assu -<br>rance-invalidité (AI) au moyen du formulaire officiel. Après réception de la <br>demande, l’office AI examine si les conditions du droit à des prestations <br>sont remplies. Il recueille pour cela tous les renseignements nécessaires à <br>l’instruction de la demande. L’instruction prend en considération toutes les <br>prestations de l’AI. Ce n’est qu’après avoir examiné toutes les circonstances <br>du cas que l’AI peut décider si la personne assurée obtiendra des presta -<br>tions. <br>Le présent mémento informe les assurés, ainsi que les personnes habilitées <br>à faire une communication à l’AI, sur la procédure dans l’AI. 3Communication<br>1 Qui peut adresser une com...</code> | <code>1.03 Généralités <br>Bonifications pour<br>tâches d’assistance<br>Etat au 1er janvier 2021 2En bref<br>Les dispositions légales prévoient la prise en compte des bonifications pour <br>tâches d’assistance dans le calcul des rentes.<br>Ces bonifications s’ajoutent aux revenus formateurs de rente et vous per -<br>mettent de toucher une rente plus élevée si vous avez pris soin d’un parent <br>dépendant. Il ne s’agit toutefois pas d’une prestation en espèces.<br>Vous pouvez faire valoir votre droit aux bonifications pour tâches <br>d’assistance au plus tôt l’année civile qui suit votre 17e anniversaire et au <br>plus tard jusqu’au 31 décembre de celle qui précède l’âge de référence.<br>Droit aux bonifications pour tâches d’assistance<br>1 Quand ai-je droit à ces bonifications ?<br>Vous y avez droit si vous avez pris soin de parents vivant à proximité. Sont <br>considérés comme parents, le conjoint, les enfants, les parents, les frères <br>et soeurs, les grands-parents, les arrière-grands-parents, les petits-enfants, <br>les beaux-parents,...</code> |
| <code>Welche Löhne sind von der Beitragsabrechnung ausgenommen?</code> | <code>2.04 Beiträge<br>Beiträge an die AHV,<br>die IV, die EO und die ALV<br>auf geringfügigen Löhnen<br>Stand am 1. Januar 2025 2Auf einen Blick<br>Grundsätzlich müssen von jeder Lohnzahlung AHV/IV/EO- und ALV-Bei -<br>träge abgezogen werden (siehe Ziffern 6 - 8). Dies gilt uneingeschränkt für <br>Personen, die<br>• in einem Privathaushalt beschäftigt sind (beitragsfrei bleiben jedoch <br>Löhne bis zu 750 Franken pro Jahr und Arbeitgeberin oder Arbeitgeber <br>an Jugendliche bis 25 Jahre; siehe Ziffer 6) oder<br>• von Tanz- und Theaterproduzenten, Orchestern, Phono- und Audiovi -<br>sionsproduzenten, Radio und Fernsehen, sowie von Schulen im künst -<br>lerischen Bereich entlöhnt werden.<br>In anderen Branchen müssen keine Beiträge erhoben werden (siehe Ziffern <br>1 - 5), wenn<br>• der Lohn 2 500 Franken pro Jahr und Arbeitgeberin oder Arbeitgeber <br>nicht übersteigt, und<br>• die Arbeitnehmenden die Beitragsentrichtung nicht verlangen.<br>Dieses Merkblatt informiert Sie als Arbeitgeberin oder Arbeitgeber über <br>die Beitragsentrichtung auf ger...</code> | <code>5.02 Prestazioni complementari <br>Diritto alle prestazioni <br>complementari<br>all’AVS e all’AI<br>Stato al 1° gennaio 2025 2In breve<br>Le prestazioni complementari (PC) sono concesse quando le rendite e il <br>reddito non coprono il fabbisogno vitale. Sono un diritto. Assieme all’AVS <br>e all’AI, le PC costituiscono un importante fondamento del nostro Stato <br>sociale.<br>Le pagine seguenti permetteranno agli assicurati di effettuare un calcolo <br>approssimativo del loro eventuale diritto a prestazioni complementari. Se <br>le spese sono superiori ai redditi o i redditi superano solo di poco le spese, <br>potrebbe esistere un diritto a PC, a condizione che la loro sostanza non <br>superi 100 000 franchi (persone sole) o 200 000 franchi (coppie di coniugi) <br>o 50 000 franchi (orfani che hanno diritto a una rendita e figli che danno <br>diritto a una rendita per figli dell’AVS o dell’AI.<br>Il foglio di calcolo allegato è destinato esclusivamente ai beneficiari di ren -<br>dite AVS e AI che vivono a casa. I cittadini stranieri s...</code> |
| <code>Quels sont les droits à l'allocation de l'autre parent en cas de décès d'un des parents ?</code> | <code>1.2024 Généralités<br>Modifications au<br>1er janvier 2024<br>Etat au 1er janvier 2024 2En bref<br>Le présent mémento vous renseigne sur les modifications entrant en <br>vigueur le 1er janvier 2024.<br>Stabilisation de l’AVS (AVS 21)<br>Vous trouverez des informations détaillées sur la réforme AVS 21 dans les <br>mémentos Stabilisation de l’AVS (AVS 21) Qu’est-ce qui change ? , 3.01 - <br>Rentes de vieillesse et allocations pour impotent de l’AVS , 3.04 - Flexibili -<br>sation de la retraite , 3.06 - Calcul anticipé de la rente et 3.08 - Nouveau <br>calcul de la rente après l’âge de référence .<br>Ces nouveautés sont expliquées clairement et simplement dans une vidéo <br>sur la stabilisation de l’AVS (AVS 21) : https://ahv-iv.ch/r/videoahv21fr 3Allocations pour perte de gain (APG)<br>1 Allocation de paternité ou de l’épouse de la mère <br>Depuis l’entrée en vigueur, le 1er juillet 2022, des modifications légales liées <br>au mariage pour tous, l’épouse de la mère a également droit, à certaines <br>conditions, à l’allocation de pater...</code> | <code>Prestations complémentaires : <br>versements à des tiers des <br>taxes journalières du home <br>ou de l’hôpital<br>Etat au 1er janvier 2025 2En bref<br>À partir du 1er janvier 2021, les bénéficiaires de prestations complémen -<br>taires (PC) peuvent faire verser un certain montant de leurs prestations <br>directement au home ou à l’hôpital dans lequel ils séjournent.<br>Selon l’art. 21c de l’ordonnance sur les prestations complémentaires à l’as -<br>surance-vieillesse, survivants et invalidité, l’ordre suivant s’applique : <br>a. le montant pour l’assurance obligatoire des soins est d’abord versé à <br>l’assureur-maladie ; <br>b. un montant n’excédant pas le montant pour les dépenses personnelles <br>est ensuite versé au bénéficiaire ; des montants différents s’appliquent <br>selon les cantons ; <br> (selon le chiffre 4260.02 DPC le loyer cas écheant doit être versé au <br>bénéficiaire)<br>c. après déduction des montants prévus aux let. a et b, un montant n’ex -<br>cédant pas la taxe journalière est versé au fournisseur de prestations...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
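The loss above scores each anchor against every positive in the batch with cosine similarity, scales the scores by 20, and applies cross-entropy with the matching pair as the correct class. A minimal pure-Python sketch of that computation (illustrative only; the real implementation lives in `sentence_transformers.losses`):

```python
import math

def cos_sim(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def mnrl_loss(anchors, positives, scale=20.0):
    """Multiple-negatives ranking loss for one batch.

    Each anchor's positive sits at the same index; every other
    positive in the batch acts as an in-batch negative.
    """
    total = 0.0
    for i, a in enumerate(anchors):
        scores = [scale * cos_sim(a, p) for p in positives]
        # Cross-entropy with the diagonal entry as the correct class.
        log_softmax = scores[i] - math.log(sum(math.exp(s) for s in scores))
        total -= log_softmax
    return total / len(anchors)

# Toy batch of 2 (anchor, positive) pairs in 3-d embedding space.
anchors = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
positives = [[0.9, 0.1, 0.0], [0.1, 0.9, 0.0]]
loss = mnrl_loss(anchors, positives)
```

Note that with `per_device_train_batch_size: 1` (see below) there is only one pair per batch, so no in-batch negatives are available; the hard negatives in the third column of the training examples then carry the contrastive signal.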
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 1
- `per_device_eval_batch_size`: 1
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
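These non-default values could be reproduced with a training-arguments object along the following lines (a sketch, not the original training script; the output directory name is a placeholder):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    fp16=True,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```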
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 1
- `per_device_eval_batch_size`: 1
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | tri-acc_cosine_accuracy | ir-metrics_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:-----------------------:|:-------------------------:|
| 0.7752 | 500 | 0.1086 | 0.0972 | 0.0766 |
| 1.0 | 645 | - | 0.1111 | 0.0766 |
| 1.5504 | 1000 | 0.042 | 0.1111 | 0.0766 |
| 2.0 | 1290 | - | 0.1111 | 0.0766 |
| 2.3256 | 1500 | 0.0055 | 0.1111 | 0.0766 |
| 3.0 | 1935 | - | 0.1111 | 0.0766 |
### Framework Versions
- Python: 3.11.7
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.4.1+cu121
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
lmstudio-community/Qwen3-0.6B-GGUF
|
lmstudio-community
| 2025-04-29T14:53:04Z | 2,479 | 1 | null |
[
"gguf",
"text-generation",
"base_model:Qwen/Qwen3-0.6B",
"base_model:quantized:Qwen/Qwen3-0.6B",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-04-28T16:57:39Z |
---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: Qwen/Qwen3-0.6B
base_model_relation: quantized
---
## 💫 Community Model> Qwen3 0.6B by Qwen
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [Qwen](https://huggingface.co/Qwen)<br>
**Original model**: [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b5200](https://github.com/ggerganov/llama.cpp/releases/tag/b5200)<br>
## Technical Details
- Supports a context length of up to 32k tokens
- Supports `/no_think` to disable reasoning: just add it at the end of your prompt
- Supports both thinking and non-thinking modes, with enhanced reasoning in both for significantly improved mathematics, coding, and commonsense performance
- Excels at creative writing, role-playing, multi-turn dialogues, and instruction following
- Advanced agent capabilities and support for over 100 languages and dialects
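The `/no_think` switch mentioned above rides along in the user message itself. A sketch of an OpenAI-compatible chat payload for a local server (the model identifier is a placeholder, and no request is actually sent here):

```python
# Build a chat payload with /no_think appended so Qwen3 skips
# its reasoning phase on this turn.
payload = {
    "model": "qwen3-0.6b",  # placeholder model identifier
    "messages": [
        {"role": "user",
         "content": "Explain recursion briefly. /no_think"},
    ],
    "max_tokens": 256,
}
```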
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|