Columns: user: string (3–28 chars); created_at: timestamp[us]; body: string (1–173k chars); issue_number: int64 (1–2.51k)
muellerzr
2024-12-11T02:52:48
Correct, that's not what we want to do because with the fix to how we calculate the number of items in the batch, the losses will not align and things will be off, so we *don't* divide the loss by accumulation steps if we know that value. I'd need to play with this a bit as I'm not 100% sure if we can just modify the grads for clipping without modifying the overall loss we just calculated :thinking:
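For readers following along, here is a minimal, self-contained PyTorch sketch (not the `transformers` implementation) of the two scaling conventions under discussion: dividing each micro-batch's mean loss by the number of accumulation steps, versus summing per-item losses and dividing once by the total number of items in the effective batch. The toy model and data are invented for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 1)
micro_batches = [torch.randn(2, 4) for _ in range(4)]   # 4 accumulation steps, 2 items each
targets = [torch.randn(2, 1) for _ in range(4)]
accum_steps = len(micro_batches)
num_items = sum(t.numel() for t in targets)              # total items in the effective batch

# Convention 1: average each micro-batch loss, then divide by the number of accumulation steps.
model.zero_grad()
for x, y in zip(micro_batches, targets):
    loss = nn.functional.mse_loss(model(x), y, reduction="mean") / accum_steps
    loss.backward()
grad_a = model.weight.grad.clone()

# Convention 2: sum per-item losses and divide once by the total item count,
# so there is no extra division by the number of accumulation steps.
model.zero_grad()
for x, y in zip(micro_batches, targets):
    loss = nn.functional.mse_loss(model(x), y, reduction="sum") / num_items
    loss.backward()
grad_b = model.weight.grad.clone()

# With equally sized micro-batches the two conventions produce identical gradients;
# with variable-sized micro-batches (e.g. differing token counts per step) they do not.
print(torch.allclose(grad_a, grad_b))  # True here
```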
2,456
AIR-hl
2024-12-11T03:10:34
> The issue arises from how the accelerator is configured in [`create_accelerator_and_postprocess`](https://github.com/huggingface/transformers/blob/91b8ab18b778ae9e2f8191866e018cd1dc7097be/src/transformers/trainer.py#L4990).

@qgallouedec I have a new question: if the problem arises from [create_accelerator_and_postprocess](https://github.com/huggingface/transformers/blob/91b8ab18b778ae9e2f8191866e018cd1dc7097be/src/transformers/trainer.py#L4990) in `transformers.Trainer`, why is `trl.SFTTrainer`'s behavior normal while `trl.DPOTrainer`'s isn't? They both inherit from `transformers.Trainer`.

sft, `batch_size=4`, `accumulation=8`
![7cf799b818cdced95fc4632de02a8fba](https://github.com/user-attachments/assets/35e77e32-544a-4e25-90d9-a3b2ba2b8525)

sft, `batch_size=2`, `accumulation=16`
![1eba3468eab71db9185de3a1ab0120b9](https://github.com/user-attachments/assets/2eadda34-61e4-4cf4-ba63-153d23d7bcd1)

sft, `batch_size=1`, `accumulation=32`
![c6e2266b5eb3ff8736fe652a85124a41](https://github.com/user-attachments/assets/02a34f43-7b98-4ac1-b2c3-c33cf6cb66a0)
2,456
qgallouedec
2024-12-11T10:21:00
> @qgallouedec I have a new question: if the problem arises from [create_accelerator_and_postprocess](https://github.com/huggingface/transformers/blob/91b8ab18b778ae9e2f8191866e018cd1dc7097be/src/transformers/trainer.py#L4990) in `transformers.Trainer`, why is `trl.SFTTrainer`'s behavior normal while `trl.DPOTrainer`'s isn't? They both inherit from `transformers.Trainer`.

I can't explain it right now. Any idea?
2,456
qgallouedec
2024-12-11T10:41:00
I may have found the solution: https://github.com/huggingface/transformers/pull/35207 Running some experiments...
2,456
qgallouedec
2024-12-11T11:12:53
## Does it solve the issue?

### Before the fix

Same effective batch size (32):
- grad accumulation = 32 / batch_size = 1
- grad accumulation = 8 / batch_size = 4

![Screenshot 2024-12-11 at 12 04 50](https://github.com/user-attachments/assets/d4b7513b-23c3-427a-aed7-72614bf337d0)

We can see here that the grad_norm is different while it should be the same.

### After the fix

Same effective batch size (32):
- grad accumulation = 32 / batch_size = 1
- grad accumulation = 8 / batch_size = 4

![Screenshot 2024-12-11 at 12 04 40](https://github.com/user-attachments/assets/40b10719-28a5-4cdc-b2ed-c39e9421b2d9)

Now the grad_norm is the same.

## Does it impact the results?

### Config 1

grad accumulation = 32 / batch_size = 1 (effective batch size = 32). Curves are _before the fix_ and _after the fix_.

![Screenshot 2024-12-11 at 12 04 14](https://github.com/user-attachments/assets/81b7135a-921f-4275-8dd5-27cce38bb612)

The only value impacted is the grad_norm, no impact on loss.

### Config 2

grad accumulation = 8 / batch_size = 4 (effective batch size = 32). Curves are _before the fix_ and _after the fix_.

![Screenshot 2024-12-11 at 12 03 13](https://github.com/user-attachments/assets/6a26889f-7314-4b10-9919-16b31bc0c77a)

The only value impacted is the grad_norm, no impact on loss.
2,456
AIR-hl
2024-12-11T13:01:32
@qgallouedec Thanks for your work! So this bug actually only affects the reported logs and not the training results, right? :)
2,456
qgallouedec
2024-12-11T13:03:18
That's what the results suggest, yes.
2,456
qgallouedec
2024-12-11T13:25:54
Leaving the issue open until https://github.com/huggingface/transformers/pull/35207 is properly merged
2,456
August-murr
2024-12-10T09:29:57
@qgallouedec how's everything so far? Is there anything you'd like me to change?
2,455
qgallouedec
2024-12-10T09:38:57
Thanks @August-murr for this PR! As mentioned in this [comment](https://github.com/huggingface/trl/issues/2429#issuecomment-2515244907), I think it would be better to start by only adding this feature to the functions of `trl/data_utils.py` and check that everything works as expected, without adding it to any trainer for the moment. In fact, my idea is to gradually drop the functions from `trl/extras/dataset_formatting.py`.
2,455
qgallouedec
2024-12-10T11:34:47
Looks good! We just need to update the docstrings of the functions and add some unittests
2,455
August-murr
2024-12-11T11:58:34
I'm assuming we should also integrate the functions from `data_utils.py` into all the trainers, correct?
2,455
qgallouedec
2024-12-11T12:12:44
> I'm assuming we should also integrate the functions from `data_utils.py` into all the trainers, correct?

Indeed, but we'll do that in a follow-up PR. I think it's the best way to go.
2,455
HuggingFaceDocBuilderDev
2024-12-11T12:21:37
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2455). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,455
August-murr
2024-12-11T19:09:33
@qgallouedec let me know if there is anything else I need to do.
2,455
qgallouedec
2024-12-11T21:44:24
Looks good to me! Just one minor comment.
2,455
qgallouedec
2024-12-12T15:46:38
```python
from transformers import AutoProcessor
from trl import apply_chat_template

tokenizer = AutoProcessor.from_pretrained("trl-internal-testing/tiny-LlamaForCausalLM-3.2")

# Define dummy test tools
def get_current_temperature(location: str):
    """
    Gets the temperature at a given location.

    Args:
        location: The location to get the temperature for
    """
    return 22.0

# Define test case
test_case = {
    "prompt": [
        {"content": "Whats the temperature in London?", "role": "user"},
    ]
}

# Test with tools
result_with_tools = apply_chat_template(test_case, tokenizer, tools=[get_current_temperature])
print(result_with_tools["prompt"])
```

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Environment: ipython
Cutting Knowledge Date: December 2023
Today Date: 12 Dec 2024

<|eot_id|><|start_header_id|>user<|end_header_id|>

Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.

Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.Do not use variables.

{
    "type": "function",
    "function": {
        "name": "get_current_temperature",
        "description": "Gets the temperature at a given location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The location to get the temperature for"
                }
            },
            "required": [
                "location"
            ]
        }
    }
}

Whats the temperature in London?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```

Nice!
2,455
HuggingFaceDocBuilderDev
2024-12-09T17:55:56
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2454). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,454
kashif
2024-12-09T13:30:15
@NINGBENZHE do you know where in the code the issue is occurring?
2,453
NINGBENZHE
2024-12-09T14:09:34
> @NINGBENZHE do you know where in the code the issue is occurring?

I haven't found the issue yet, but after making the modifications, the critic's loss is functioning normally, and the optimizer's functionality has been restored.
2,453
kashif
2024-12-09T14:10:58
Ok, let me try to pinpoint the issue... and perhaps try to add a failing test?
2,453
NINGBENZHE
2024-12-09T14:23:17
> Ok, let me try to pinpoint the issue... and perhaps try to add a failing test?

You can repeat the same data and observe the critic's loss; it remains unchanged.
2,453
NINGBENZHE
2024-12-09T14:24:38
I found that the issue might have been introduced by this PR. https://github.com/huggingface/trl/commit/16fa13ce728e537a91742571b0c4824fb3a98a30
2,453
HuggingFaceDocBuilderDev
2024-12-09T14:44:23
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2453). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,453
kashif
2024-12-09T14:47:50
@NINGBENZHE can you kindly run `make precommit` in the root dir to fix the formatting?
2,453
NINGBENZHE
2024-12-10T01:10:44
> @NINGBENZHE can you kindly run `make precommit` in the root dir to fix the formatting?

I made the submission directly on the web, without using local Git, and only modified the parameter names, so it should not have introduced any new formatting issues. Can you force the merge?
2,453
kashif
2024-12-12T10:45:09
Closing, as these changes have been merged into PR #2463.
2,453
asparius
2024-12-10T14:08:59
You have two GPUs, but you only use 1 of them in your accelerate config. You could also use DeepSpeed to further decrease the memory footprint. Lastly, keep `per_device_train_batch_size` as low as possible and instead increase `gradient_accumulation_steps`.
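For illustration, a hedged sketch of the kind of configuration being suggested; the output directory and the exact numbers are placeholders, and the effective batch size is num_gpus × per_device_train_batch_size × gradient_accumulation_steps:

```python
from trl import DPOConfig

# Keep the per-device batch small and recover the effective batch size through
# gradient accumulation: with 2 GPUs, 2 * 1 * 16 = 32 samples per optimizer step.
training_args = DPOConfig(
    output_dir="dpo-output",            # placeholder
    per_device_train_batch_size=1,      # as low as possible
    gradient_accumulation_steps=16,     # increase this instead
    gradient_checkpointing=True,        # trades compute for memory
    bf16=True,
)
```

Launching with `accelerate launch --num_processes=2 train.py` (or through a DeepSpeed config) would then actually use both GPUs.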
2,452
gp-1108
2024-12-12T18:20:59
Hi @asparius, thank you for the suggestions. As I am running this code on a computing cluster I am having some problems with [deepspeed](https://github.com/microsoft/DeepSpeed/issues/2772#issuecomment-2151669077). I would like to keep this issue open and get back once I have solved those issues
2,452
qgallouedec
2024-12-13T22:33:34
It might come from your data. Do you have long sequences in your dataset? It's strongly recommended to set these arguments in the `DPOConfig`: `max_length`, `max_prompt_length`, `max_completion_length`. E.g.

```python
DPOConfig(
    ...,
    max_prompt_length=128,
    max_completion_length=512,
)
```
2,452
asparius
2024-12-14T00:46:01
@gp-1108 I faced similar issues. I would recommend checking the available modules on your cluster with a command like `module avail` and loading a CUDA installation with `module load`; of course, this assumes you are in a Slurm environment. If you don't have CUDA among the available modules, perhaps you could ask the cluster admins to install it. I think you should be good after this.
2,452
gp-1108
2024-12-16T00:52:13
Hi all, I have finally fixed all of the CUDA issues with the computing cluster 😮‍💨. However, I did not fix the original issue: I am still running OOM even after using two full A40s. I have tweaked both the script and the accelerate config, so I will leave them below (I hope everything is set up as it should be).

**TRL ENV:**
```
- Platform: Linux-6.12.1-1.el8.elrepo.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.12
- PyTorch version: 2.5.1
- CUDA device(s): NVIDIA A40, NVIDIA A40
- Transformers version: 4.46.0
- Accelerate version: 1.2.1
- Accelerate config:
  - compute_environment: LOCAL_MACHINE
  - distributed_type: MULTI_GPU
  - mixed_precision: no
  - use_cpu: False
  - debug: True
  - num_processes: 2
  - machine_rank: 0
  - num_machines: 1
  - gpu_ids: all
  - rdzv_backend: static
  - same_network: True
  - main_training_function: main
  - enable_cpu_affinity: False
  - downcast_bf16: no
  - tpu_use_cluster: False
  - tpu_use_sudo: False
  - tpu_env: []
  - dynamo_config: {'dynamo_backend': 'INDUCTOR'}
```

**SCRIPT:**
```python
import argparse
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftConfig, PeftModel, LoraConfig
from trl import DPOConfig, DPOTrainer
import utils as ut
import torch
from accelerate import Accelerator
import os

os.environ['WANDB_DISABLED'] = 'true'
# import wandb


def print_memory_usage(description="Memory Usage"):
    """
    Prints the current memory usage for all available GPU devices.

    Args:
        description (str): A short description for context.
    """
    if torch.cuda.is_available():
        print(f"{description}:")
        for i in range(torch.cuda.device_count()):
            device = f"cuda:{i}"
            free_mem, total_mem = torch.cuda.mem_get_info(device)
            used_mem = total_mem - free_mem
            total_mem_mb = total_mem / 1024**2  # Convert to MB
            free_mem_mb = free_mem / 1024**2  # Convert to MB
            used_mem_mb = used_mem / 1024**2  # Convert to MB
            print(f"  Device: {device}")
            print(f"    Total Memory: {total_mem_mb:.2f} MB")
            print(f"    Used Memory: {used_mem_mb:.2f} MB")
            print(f"    Free Memory: {free_mem_mb:.2f} MB")
    else:
        print("CUDA is not available on this system.")


def main(args):
    """
    wandb.init(
        # set the wandb project where this run will be logged
        project="my-awesome-project",
    )
    """
    accelerator = Accelerator(
        mixed_precision="no",
        gradient_accumulation_steps=args.gradient_acc,
    )

    print(args)
    print_memory_usage(description="Before anything")

    # Load dataset
    print("Loading dataset...")
    dataset = ut.load_dataset(args.dataset_path)
    dataset = dataset.train_test_split(test_size=args.test_split)

    # Load PEFT configuration
    print(f"Loading PEFT model configuration from {args.peft_model_id}...")
    config = PeftConfig.from_pretrained(args.peft_model_id)

    # Configure quantization
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        llm_int8_threshold=6.0,
        llm_int8_has_fp16_weight=False,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type="nf4",
    )

    # Load base model
    print(f"Loading base model from {config.base_model_name_or_path}...")
    model = AutoModelForCausalLM.from_pretrained(
        config.base_model_name_or_path,
        quantization_config=bnb_config,
        trust_remote_code=True,  # Hardcoded
        torch_dtype=torch.bfloat16,
    )
    model.config.use_cache = False
    model.enable_input_require_grads()  # To avoid error https://github.com/huggingface/trl/issues/731
    print_memory_usage(description="After model init")

    # Load tokenizer
    print(f"Loading tokenizer from {config.base_model_name_or_path}...")
    tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
    tokenizer.eos_token = "<|eot_id|>"  # Hardcoded
    tokenizer.pad_token = "<|finetune_right_pad_id|>"  # Hardcoded

    # Load PEFT model
    print(f"Loading PEFT model from {args.peft_model_id}...")
    model = PeftModel.from_pretrained(
        model,
        args.peft_model_id,
        adapter_name="trainable",
        is_trainable=True
    )
    model.load_adapter(args.peft_model_id, adapter_name="reference")  # Hardcoded
    print_memory_usage(description="After two adapters")

    tokenizer.chat_template = None

    # Configure training arguments
    training_args = DPOConfig(
        learning_rate=args.learning_rate,
        beta=args.beta,
        loss_type=args.loss_type,
        use_weighting=args.use_weighting,
        rpo_alpha=args.rpo_alpha,
        output_dir=args.output_dir,
        logging_steps=args.logging_steps,
        model_adapter_name="trainable",  # Hardcoded
        ref_adapter_name="reference",  # Hardcoded
        per_device_train_batch_size=args.batch_size,
        gradient_accumulation_steps=args.gradient_acc,
    )

    # Configure Lora
    peft_config = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.1,
        target_modules=['q_proj', 'v_proj', 'k_proj', 'o_proj', 'lm_head']
    )

    # Initialize DPO trainer
    print("Initializing DPO trainer...")
    dpo_trainer = DPOTrainer(
        model=model,
        args=training_args,
        tokenizer=tokenizer,
        train_dataset=dataset["train"],
        eval_dataset=dataset["test"],
        peft_config=peft_config,
    )

    # Prepare everything for training
    model, tokenizer, train_dataset, eval_dataset = accelerator.prepare(
        model, tokenizer, dataset["train"], dataset["test"]
    )

    # Train the model
    print("Starting training...")
    dpo_trainer.train()
    print("Training complete.")
    dpo_trainer.save_model()


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Fine-tune a model using PEFT and DPOTrainer.")
    parser.add_argument("--dataset_path", type=str, required=True, help="Path to the dataset file (JSONL).")
    parser.add_argument("--test_split", type=float, default=0.15, help="Proportion of dataset to use for testing.")
    parser.add_argument("--peft_model_id", type=str, required=True, help="Path to the PEFT model directory.")
    parser.add_argument("--load_in_8bit", action="store_true", help="Enable 8-bit quantization.")
    parser.add_argument("--output_dir", type=str, default="Llama31_DPO", help="Directory to save the trained model.")
    parser.add_argument("--logging_steps", type=int, default=1, help="Number of steps for logging during training.")
    parser.add_argument("--learning_rate", type=float, default=1e-6, help="Learning rate for the AdamW optimizer.")
    parser.add_argument("--beta", type=float, default=0.1, help="Parameter controlling deviation from the reference model.")
    parser.add_argument("--loss_type", type=str, default="sigmoid", help="Type of loss to use for training.")
    parser.add_argument("--use_weighting", action="store_true", help="Enable weighting of the loss.")
    parser.add_argument("--rpo_alpha", type=float, default=None, help="Alpha parameter for the RPO paper.")
    parser.add_argument("--batch_size", type=int, default=1, help="Batch size for training per gpu.")
    parser.add_argument("--gradient_acc", type=int, default=1, help="Gradient accumulation steps.")
    args = parser.parse_args()
    main(args)
```

The script crashes after being called with the following parameters:

```sh
accelerate launch --num_processes=2 --num_machines=1 --mixed_precision=no --dynamo_backend=inductor dpo_finetuning.py \
    --dataset_path ../dataset_generation/data/dpo_dialogues.jsonl \
    --peft_model_id ../llama3.1_finetuning/output/llama3.1_SFT_from_Base/checkpoint-800 \
    --output_dir ./tmp \
    --logging_steps 1 \
    --load_in_8bit \
    --batch_size 1 \
    --gradient_acc 1
```

The full traceback is this (sorry for the duplication, it is two processes):

```
The following values were not passed to `accelerate launch` and had defaults used instead:
More than one GPU was found, enabling multi-GPU training. If this was unintended please pass in `--num_processes=1`.
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
[2024-12-16 01:33:02,163] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-12-16 01:33:02,165] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Warning: The cache directory for DeepSpeed Triton autotune, /home/girottopie/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Warning: The cache directory for DeepSpeed Triton autotune, /home/girottopie/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Namespace(dataset_path='../dataset_generation/data/dpo_dialogues.jsonl', test_split=0.15, peft_model_id='../llama3.1_finetuning/output/llama3.1_SFT_from_Base/checkpoint-800', load_in_8bit=True, output_dir='./tmp', logging_steps=1, learning_rate=1e-06, beta=0.1, loss_type='sigmoid', use_weighting=False, rpo_alpha=None, batch_size=1, gradient_acc=1)
Before anything:
  Device: cuda:0
    Total Memory: 45515.00 MB
    Used Memory: 268.38 MB
    Free Memory: 45246.62 MB
Namespace(dataset_path='../dataset_generation/data/dpo_dialogues.jsonl', test_split=0.15, peft_model_id='../llama3.1_finetuning/output/llama3.1_SFT_from_Base/checkpoint-800', load_in_8bit=True, output_dir='./tmp', logging_steps=1, learning_rate=1e-06, beta=0.1, loss_type='sigmoid', use_weighting=False, rpo_alpha=None, batch_size=1, gradient_acc=1)
Before anything:
  Device: cuda:1
    Total Memory: 45515.00 MB
    Used Memory: 533.69 MB
    Free Memory: 44981.31 MB
Loading dataset...
  Device: cuda:0
    Total Memory: 45515.00 MB
    Used Memory: 533.69 MB
    Free Memory: 44981.31 MB
  Device: cuda:1
    Total Memory: 45515.00 MB
    Used Memory: 533.69 MB
    Free Memory: 44981.31 MB
Loading dataset...
Loading PEFT model configuration from ../llama3.1_finetuning/output/llama3.1_SFT_from_Base/checkpoint-800...
Loading base model from meta-llama/Meta-Llama-3.1-8B...
Loading PEFT model configuration from ../llama3.1_finetuning/output/llama3.1_SFT_from_Base/checkpoint-800...
Loading base model from meta-llama/Meta-Llama-3.1-8B...
`low_cpu_mem_usage` was None, now default to True since model is quantized.
`low_cpu_mem_usage` was None, now default to True since model is quantized.
Loading checkpoint shards: 100%|██████████| 4/4 [00:12<00:00, 2.67s/it]
Loading checkpoint shards: 100%|██████████| 4/4 [00:12<00:00, 3.05s/it]
After model init:
  Device: cuda:0
    Total Memory: 45515.00 MB
    Used Memory: 6105.69 MB
    Free Memory: 39409.31 MB
  Device: cuda:1
    Total Memory: 45515.00 MB
    Used Memory: 6105.69 MB
    Free Memory: 39409.31 MB
Loading tokenizer from meta-llama/Meta-Llama-3.1-8B...
After model init:
  Device: cuda:0
    Total Memory: 45515.00 MB
    Used Memory: 6105.69 MB
    Free Memory: 39409.31 MB
  Device: cuda:1
    Total Memory: 45515.00 MB
    Used Memory: 6105.69 MB
    Free Memory: 39409.31 MB
Loading tokenizer from meta-llama/Meta-Llama-3.1-8B...
Loading PEFT model from ../llama3.1_finetuning/output/llama3.1_SFT_from_Base/checkpoint-800...
Loading PEFT model from ../llama3.1_finetuning/output/llama3.1_SFT_from_Base/checkpoint-800...
After two adapters:
  Device: cuda:0
    Total Memory: 45515.00 MB
    Used Memory: 10387.69 MB
    Free Memory: 35127.31 MB
  Device: cuda:1
    Total Memory: 45515.00 MB
    Used Memory: 6209.69 MB
    Free Memory: 39305.31 MB
Using the `WANDB_DISABLED` environment variable is deprecated and will be removed in v5. Use the --report_to flag to control the integrations used for logging result (for instance --report_to none).
Initializing DPO trainer...
/usr/local/lib/python3.10/dist-packages/peft/tuners/lora/bnb.py:355: UserWarning: Merge lora module to 4-bit linear may get different generations due to rounding errors.
  warnings.warn(
After two adapters:
  Device: cuda:0
    Total Memory: 45515.00 MB
    Used Memory: 10397.69 MB
    Free Memory: 35117.31 MB
  Device: cuda:1
    Total Memory: 45515.00 MB
    Used Memory: 6209.69 MB
    Free Memory: 39305.31 MB
Using the `WANDB_DISABLED` environment variable is deprecated and will be removed in v5. Use the --report_to flag to control the integrations used for logging result (for instance --report_to none).
Initializing DPO trainer...
/usr/local/lib/python3.10/dist-packages/peft/tuners/lora/bnb.py:355: UserWarning: Merge lora module to 4-bit linear may get different generations due to rounding errors.
  warnings.warn(
// Here it just fills in tqdm bars so I will skip this bit
Applying chat template to eval dataset: 100%|██████████| 953/953 [00:00<00:00, 17233.72 examples/s]
....
Tokenizing eval dataset: 100%|██████████| 953/953 [00:05<00:00, 161.72 examples/s]
Starting training...
Starting training...
  0%|          | 0/8100 [00:00<?, ?it/s]
[rank1]:W1216 01:34:45.892000 1621718 torch/_logging/_internal.py:1081] [0/0] Profiler function <class 'torch.autograd.profiler.record_function'> will be ignored
/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py:725: UserWarning: Graph break due to unsupported builtin None._SimpleCData.__new__. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
  torch._dynamo.utils.warn_once(msg)
/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py:725: UserWarning: Graph break due to unsupported builtin None._SimpleCData.__new__. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
  torch._dynamo.utils.warn_once(msg)
[rank0]:[W1216 01:35:25.807899242 reducer.cpp:1400] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[rank1]:[W1216 01:35:32.085567519 reducer.cpp:1400] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
Could not estimate the number of tokens of the input, floating-point operations will not be computed
...
// Skipping here some loss metrics prompted on the first 3 samples
...
  0%|          | 4/8100 [01:31<36:56:14, 16.42s/it]
[rank1]: Traceback (most recent call last):
[rank1]:   File "/nfsd/nldei/girottopie/NLP_DPO-Finetuning/llama3.1_dpo/dpo_finetuning.py", line 175, in <module>
[rank1]:     main(args)
[rank1]:   File "/nfsd/nldei/girottopie/NLP_DPO-Finetuning/llama3.1_dpo/dpo_finetuning.py", line 154, in main
[rank1]:     dpo_trainer.train()
[rank1]:   File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2122, in train
[rank1]:     return inner_training_loop(
[rank1]:   File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2474, in _inner_training_loop
[rank1]:     tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank1]:   File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 3572, in training_step
[rank1]:     loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
[rank1]:   File "/usr/local/lib/python3.10/dist-packages/trl/trainer/dpo_trainer.py", line 1371, in compute_loss
[rank1]:     loss, metrics = self.get_batch_loss_metrics(model, inputs, train_eval="train")
[rank1]:   File "/usr/local/lib/python3.10/dist-packages/trl/trainer/dpo_trainer.py", line 1323, in get_batch_loss_metrics
[rank1]:     model_output = self.concatenated_forward(model, batch)
[rank1]:   File "/usr/local/lib/python3.10/dist-packages/trl/trainer/dpo_trainer.py", line 1274, in concatenated_forward
[rank1]:     per_token_logps = torch.gather(logits.log_softmax(-1), dim=2, index=labels.unsqueeze(2)).squeeze(2)
[rank1]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.54 GiB. GPU 1 has a total capacity of 44.45 GiB of which 1.46 GiB is free. Including non-PyTorch memory, this process has 42.72 GiB memory in use. Process 1621717 has 260.00 MiB memory in use. Of the allocated memory 37.11 GiB is allocated by PyTorch, and 5.18 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
W1216 01:36:23.080000 1621711 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 1621717 closing signal SIGTERM
E1216 01:36:23.791000 1621711 torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 1 (pid: 1621718) of binary: /usr/bin/python3
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/accelerate_cli.py", line 48, in main
    args.func(args)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 1159, in launch_command
    multi_gpu_launcher(args)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 793, in multi_gpu_launcher
    distrib_run.run(args)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 910, in run
    elastic_launch(
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 138, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
dpo_finetuning.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-12-16_01:36:23
  host      : gpu1.dei.unipd.it
  rank      : 1 (local_rank: 1)
  exitcode  : 1 (pid: 1621718)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```

**STACK TRACE TLDR:** Everything seems to be ok; it can even train on a couple of examples before running out of VRAM. I think that @qgallouedec might be onto something, as my prompts and responses are quite lengthy. I have also noticed that the trainer adds a huge amount of padding tokens when pre-processing the dataset. How can I check if the length of the examples is the culprit? The examples are formatted by the trainer only once the `trainer.train()` method is called.

*NOTE*: I cannot afford to truncate the samples' text, as it is critical to sometimes have those lengthy prompt+answer pairs during training.
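One way to check whether example length is the culprit before calling `trainer.train()` is to tokenize the columns yourself and look at the length distribution. A rough sketch, assuming a preference dataset with `prompt`, `chosen`, and `rejected` text columns; the file path and checkpoint name are placeholders:

```python
from statistics import mean, median

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")        # or your base model
dataset = load_dataset("json", data_files="dpo_dialogues.jsonl", split="train")  # placeholder path


def token_len(text):
    return len(tokenizer(text, add_special_tokens=False)["input_ids"])


lengths = [
    token_len(ex["prompt"]) + max(token_len(ex["chosen"]), token_len(ex["rejected"]))
    for ex in dataset
]
print(f"min={min(lengths)} max={max(lengths)} mean={mean(lengths):.1f} median={median(lengths)}")
print("examples over 2048 tokens:", sum(l > 2048 for l in lengths))
```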
2,452
gp-1108
2024-12-20T12:02:25
Hi, I have finally solved the issue, and I am going to leave the solution here for posterity. The issue lay mainly in two things:

1. **Some samples were too long**
2. **The PEFT configuration was not working**

**MANAGING SAMPLE LENGTH:**

I plotted the lengths across a couple of metrics:

![image](https://github.com/user-attachments/assets/3be16126-de18-4c3a-b4f4-1e032e8e7444)

```
[INFO] Prompt lengths
Min length: 22
Max length: 5541
Mean length: 588.0766687657431
Median length: 569.0
STD length: 419.24555148568976
[INFO] Chosen response lengths
Min length: 47
Max length: 4826
Mean length: 192.51637279596977
Median length: 183.0
STD length: 99.76849327730292
[INFO] Rejected response lengths
Min length: 29
Max length: 185
Mean length: 71.0676952141058
Median length: 69.0
STD length: 17.396042841024304
[INFO] Overall lengths (prompt + max(chosen, rejected))
Min length: 81
Max length: 5782
Mean length: 780.6544395465995
Median length: 764.0
STD length: 435.2110251509147
```

You can clearly see that in some cases we get up to 6k length, which is not ideal. I eliminated those samples from the dataset using a [modified z-score](https://www.statology.org/modified-z-score/), but you can choose whatever you prefer. Afterwards, the maximum length was 2k, which is manageable.

**PEFT CONFIGURATION:**

I thought that by passing the `peft_config` param to the `DPOTrainer` it would automatically take care of it. However, upon closer inspection I could see in the logs that when saving the model I would get:

```
UserWarning: Setting `save_embedding_layers` to `True` as embedding layers found in `target_modules`.
```

even though my PEFT configuration did not include the embedding layer in the targets:

```python
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=['q_proj', 'v_proj', 'k_proj', 'o_proj', 'lm_head']
)
```

I resorted to the good old `get_peft_model` method from `peft`. The final setup for the model was as follows:

```python
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=['q_proj', 'v_proj', 'k_proj', 'o_proj', 'lm_head']
)

model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    quantization_config=bnb_config,
    trust_remote_code=True,  # Hardcoded
    torch_dtype=torch.bfloat16,
)
model.config.use_cache = False
model.enable_input_require_grads()  # To avoid error https://github.com/huggingface/trl/issues/731

model = PeftModel.from_pretrained(
    model,
    args.peft_model_id,
    adapter_name="trainable",
    is_trainable=True
)
model.load_adapter(args.peft_model_id, adapter_name="reference")
model = get_peft_model(model, peft_config)
```

I also avoided the `peft_config` param in the `DPOTrainer` altogether. I don't know if this is an issue or intended behaviour @qgallouedec.

**OTHER IMPROVEMENTS:**

Although I already implemented these in the previous steps, I would like to clarify that setting `per_device_train_batch_size=1` and `gradient_accumulation_steps=4` was also a key part of the solution. Now I am getting a solid 80-90% VRAM usage without any disruption.
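For reference, a minimal sketch of the modified z-score filtering mentioned above, 0.6745 * (x - median) / MAD with the commonly used cutoff of 3.5; the cutoff and the toy lengths are assumptions, not the exact filter used here:

```python
import numpy as np


def modified_z_scores(values):
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med))   # median absolute deviation
    return 0.6745 * (values - med) / mad


lengths = np.array([500, 764, 700, 820, 5782, 640, 903])   # toy prompt+response token lengths
keep = np.abs(modified_z_scores(lengths)) <= 3.5
print(lengths[keep])   # only the 5782-token outlier is dropped
```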
2,452
Kallinteris-Andreas
2024-12-08T23:00:17
What is the reason your model is not a `torch.nn.Module`? My first reaction would be that you are doing something wrong; unless you can provide a detailed explanation as to why not, can you convert your model to a `torch.nn.Module`? If you do have a good reason for a custom reward model class, you would have to modify all usages of `self.reward_model` in https://github.com/huggingface/trl/blob/9c5388b69e0842f76edc46a2ff9d0b51e1db4337/trl/trainer/ppo_trainer.py
2,451
hwhyyds
2024-12-09T06:30:26
In my code, I have trained a reward model whose output is in a format similar to `{"type1": 1, "type2": -1, "type3": 0}`, which is different from the traditional scalar output.
2,451
asparius
2024-12-10T14:02:16
> In my code, I have trained a reward model whose output is in a format similar to `{"type1": 1, "type2": -1, "type3": 0}`, which is different from the traditional scalar output.

I believe you are doing some sort of classification, so you could still have an nn-based module for the classification part and then map its results to your predefined rewards. You could apply it per-token or per-completion, depending on your policy optimization method.
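A rough sketch of that suggestion: keep an `nn.Module` classifier as the reward model and map its predicted class to a predefined scalar reward. The classifier head and the class-to-reward mapping below are invented for illustration:

```python
import torch
import torch.nn as nn


class ClassifierRewardModel(nn.Module):
    """Wraps a 3-way classifier and maps its argmax to fixed scalar rewards."""

    def __init__(self, hidden_size=16, class_rewards=(1.0, -1.0, 0.0)):
        super().__init__()
        self.backbone = nn.Linear(hidden_size, 3)   # stand-in for a real encoder + head
        self.register_buffer("class_rewards", torch.tensor(class_rewards))

    def forward(self, features):
        logits = self.backbone(features)            # (batch, 3)
        predicted_class = logits.argmax(dim=-1)     # (batch,)
        return self.class_rewards[predicted_class]  # (batch,) scalar rewards


reward_model = ClassifierRewardModel()
features = torch.randn(4, 16)   # stand-in for pooled completion representations
print(reward_model(features))   # e.g. tensor([ 1., -1.,  0.,  1.])
```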
2,451
hwhyyds
2024-12-16T10:58:18
Scores from GPT-4o, for example, can't be produced by an nn-based module.
2,451
kashif
2024-12-11T12:31:48
thanks @NIL-zhuang great catch!
2,450
HuggingFaceDocBuilderDev
2024-12-11T12:37:37
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2450). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,450
qgallouedec
2024-12-11T12:45:14
Do we test this collator somewhere?
2,450
kashif
2024-12-11T12:50:49
We do test it, but more for the labels than for the content of the ids... let me see if I can add a failing test.
2,450
kashif
2024-12-11T13:18:21
@qgallouedec added failing tests
2,450
asparius
2024-12-10T14:23:57
It is the entropy of the sequence generated by the policy given the prompt. Do you intend to measure something else?
2,448
hubstrauss
2024-12-10T15:36:05
Oh I see, my bad - since the tokens were sampled from the model, you can get a sample-based estimate of the entropy. Thanks! But then, why is the default value of INVALID_LOGPROB set to 1? When computing `-logprobs`, do these masked tokens then contribute -1 each to the sum?
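For context, a generic sketch of the sample-based entropy estimate being discussed (this is not the trl implementation): the entropy of the policy's generations can be estimated as the negative mean log-probability of the sampled tokens, with padded or otherwise invalid positions excluded through a mask rather than through a sentinel value.

```python
import torch

torch.manual_seed(0)
batch, seq_len, vocab = 2, 5, 11
logits = torch.randn(batch, seq_len, vocab)              # policy logits over the vocabulary
tokens = torch.randint(vocab, (batch, seq_len))          # tokens actually sampled from the policy
mask = torch.tensor([[1, 1, 1, 1, 0],                    # 0 marks padding / invalid positions
                     [1, 1, 1, 0, 0]], dtype=torch.bool)

logprobs = torch.log_softmax(logits, dim=-1)
token_logprobs = logprobs.gather(-1, tokens.unsqueeze(-1)).squeeze(-1)   # (batch, seq_len)

# Monte Carlo estimate of the entropy, -E[log pi(token)], averaged over valid tokens only,
# so masked positions contribute nothing regardless of any sentinel value.
entropy_estimate = -(token_logprobs * mask).sum() / mask.sum()
print(entropy_estimate)
```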
2,448
asparius
2024-12-10T18:54:38
This was already mentioned in #2281.
2,448
HuggingFaceDocBuilderDev
2024-12-06T12:33:48
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2447). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,447
anhuong
2024-12-06T17:30:29
Does this also need to be updated in the [requirements.txt](https://github.com/huggingface/trl/blob/main/requirements.txt#L4)? It still shows `transformers>=4.46.0`.
2,447
kashif
2024-12-06T18:09:57
@anhuong I believe the `requirements.txt` is used by the CI and the issue is fixed in the main branch...
2,447
qgallouedec
2024-12-06T12:24:20
Thanks for reporting. A patch release is coming asap. In the meantime, downgrading transformers to 4.46 should work:

```
pip install transformers==4.46
```

Related to https://github.com/huggingface/trl/pull/2381

Keeping the issue open until the release.
2,445
gp-1108
2024-12-06T14:59:42
Thanks @qgallouedec, downgrading to the specified version worked!
2,445
qgallouedec
2024-12-13T22:36:16
Solved with https://github.com/huggingface/trl/releases/tag/v0.12.2

```
pip install --upgrade trl
```
2,445
pspdada
2024-12-20T04:02:00
> Solved with https://github.com/huggingface/trl/releases/tag/v0.12.2
>
> ```
> pip install --upgrade trl
> ```

Hello, I've noticed that the issue has resurfaced with the latest version, trl==0.13.0. Since the requirement has reverted to `transformers>=4.46.0` in this version, the problem has reappeared. Could the trl code be fixed to be compatible with the new version of transformers?
2,445
qgallouedec
2024-12-20T10:29:41
This issue should be fixed in 0.13. Can you share your system info? (`trl env`)
2,445
pspdada
2024-12-20T11:56:57
> This issue should be fixed in 0.13. Can you share your system info? (`trl env`)

I understand what happened with the changes in this part; it was due to an error in my implementation. I apologize for the disturbance.
2,445
HuggingFaceDocBuilderDev
2024-12-05T19:15:12
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2443). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,443
HuggingFaceDocBuilderDev
2024-12-05T18:54:01
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2442). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,442
HuggingFaceDocBuilderDev
2024-12-05T15:00:27
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2441). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,441
qgallouedec
2024-12-06T09:08:45
Thanks! Can you approve?
2,441
HuggingFaceDocBuilderDev
2024-12-05T14:24:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2440). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,440
qgallouedec
2024-12-04T18:15:42
LGTM, thanks :)
2,439
dame-cell
2024-12-10T15:19:45
Not really done yet, but for now everything seems to be working: if `padding_free` is set to `True`, the trainer will not pad, and `attention_mask` will not be used.

For now, here are some tasks to be done:

- [x] Ensure that when `padding_free=True` the trainer will not pad
- [x] Ensure that when `padding_free=True` the trainer will not use or return `attention_mask`
- [x] Ensure that when `padding_free=True` we use `position_ids`
- [x] make tests
2,437
dame-cell
2024-12-11T13:02:27
Most of the work is done; just some small things left, like dealing with lists and converting them to tensors.
2,437
dame-cell
2024-12-11T14:43:58
Hey @osanseviero, the main idea for using `padding_free` is mostly in place now, but there are still a few things that need to be done. It would be awesome if you could take a look at the code and let me know if there's anything else I should address or add.

I've made it so the user can directly do this:

```python
trainer = DPOTrainer(
    model=self.model,
    ref_model=None,
    args=training_args,
    tokenizer=self.tokenizer,
    padding_free=True,  # when True, no padding is used
    train_dataset=dummy_dataset["train"],
    eval_dataset=dummy_dataset["test"],
)
```
2,437
dame-cell
2024-12-13T12:21:52
@osanseviero I think this fixes it. Sorry for the small mistakes I have been making, and thanks for your patience.
2,437
dame-cell
2024-12-13T14:44:55
Still more work to be done; not really ready yet.
2,437
dame-cell
2024-12-13T16:10:14
The padding_free path can finally train. It works, but it's not optimised and the code isn't really clean yet; I'll do a better refactoring of the code by tomorrow.
2,437
qgallouedec
2024-12-14T13:31:42
I'm not sure that having the padding-free logic in the collator is the best option here, since we concatenate prompt and completions in the trainer. Maybe the easiest is to have everything inside the `concatenated_forward` method.
2,437
dame-cell
2024-12-14T13:35:55
> I'm not sure that having the padding-free logic in the collator is the best option here, since we concatenate prompt and completions in the trainer. Maybe the easiest is to have everything inside the `concatenated_forward` method.

The question is: since `concatenated_inputs` depends on the collator, do I just create the position ids and all that stuff in `concatenated_forward`?
2,437
qgallouedec
2024-12-14T13:37:09
It sounds like the easiest way. Or am I missing something?
2,437
dame-cell
2024-12-14T13:37:54
Yes, you are right. I think this will be done by today, hopefully.
2,437
qgallouedec
2024-12-14T13:38:46
> depends on the collator

I don't think it depends on the collator. How would it?
2,437
qgallouedec
2024-12-14T13:55:21
Here https://github.com/huggingface/trl/blob/6d4ed070f1f53a87fb3cff2eb82a56db093bccc6/trl/trainer/dpo_trainer.py#L1115-L1123

After the flushing left, we could remove pad tokens, and add position ids:

```python
# Flush left to reduce the memory usage
# [[0, 0, x, x, x, x],  ->  [[x, x, x, x],
#  [0, x, x, x, 0, 0]]       [x, x, x, 0]]
for i in range(attention_mask.size(0)):
    first_one_idx = torch.nonzero(attention_mask[i])[0].item()
    input_ids[i] = torch.roll(input_ids[i], shifts=-first_one_idx)
    attention_mask[i] = torch.roll(attention_mask[i], shifts=-first_one_idx)
    loss_mask[i] = torch.roll(loss_mask[i], shifts=-first_one_idx)

if self.padding_free:
    # input =             pos_ids =           input =                     pos_ids =
    # [[x, x, x, x],  and [[0, 1, 2, 3],  ->  [x, x, x, x, x, x, x]  and  [0, 1, 2, 3, 0, 1, 2]
    #  [x, x, x]]          [0, 1, 2]]
    ...  # code here
```
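A hedged sketch of what the `# code here` part could look like, not the final PR code: keep only the non-padding tokens, concatenate them into a single row, and build position ids that restart at 0 for each original sequence.

```python
import torch

input_ids = torch.tensor([[5, 6, 7, 8],
                          [9, 10, 11, 0]])                       # 0 is a pad token here
attention_mask = torch.tensor([[1, 1, 1, 1],
                               [1, 1, 1, 0]], dtype=torch.bool)

# Keep only the real tokens, flattened into a single row.
flat_input_ids = input_ids[attention_mask].unsqueeze(0)          # (1, total_tokens)

# Position ids restart at 0 for every original sequence.
lengths = attention_mask.sum(dim=1)
flat_position_ids = torch.cat([torch.arange(int(l)) for l in lengths]).unsqueeze(0)

print(flat_input_ids)     # tensor([[ 5,  6,  7,  8,  9, 10, 11]])
print(flat_position_ids)  # tensor([[0, 1, 2, 3, 0, 1, 2]])
```

The model would then be called with these flattened inputs and `position_ids` instead of an attention mask, typically relying on FlashAttention-2-style kernels that use the position ids to infer sequence boundaries.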
2,437
dame-cell
2024-12-14T14:00:02
All right, awesome, actually this makes more sense 😭
2,437
dame-cell
2024-12-14T16:31:41
Before I push my code again, I want to benchmark this with padding and padding_free, just to show the performance.
2,437
qgallouedec
2024-12-14T16:39:28
You can push it, no worries, we can still refine it afterwards.
2,437
dame-cell
2024-12-15T11:33:37
Thank you for your understanding! I wanted to let you know that I'm a bit tied up today and tomorrow, so I might not be able to push the code right away. I'll try to get to it as soon as possible, but please feel free to let me know if there's a hard deadline I should prioritize. Thanks for your patience! I'll keep working on it, so I'll try to push it by tomorrow if I can.
2,437
qgallouedec
2024-12-15T11:35:36
No rush on our side :)
2,437
dame-cell
2024-12-17T13:34:51
All right, so I think this does it. I checked whether we can train this on a single T4 GPU Colab notebook using the example scripts provided. With a bit of an update to the file `trl/scripts/dpo.py`, I was able to train a model using `padding_free=True`:

```sh
python trl/examples/scripts/dpo.py \
    --dataset_name trl-lib/ultrafeedback_binarized \
    --model_name_or_path Qwen/Qwen2-0.5B-Instruct \
    --learning_rate 5.0e-6 \
    --num_train_epochs 1 \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 8 \
    --gradient_checkpointing \
    --logging_steps 1 \
    --output_dir Qwen2-0.5B-DPO \
    --no_remove_unused_columns \
    --use_peft \
    --lora_r 32 \
    --lora_alpha 16
```

Without `padding_free` it kept saying OOM. Is this normal or what?

I have not updated the docs yet because I'm not 100% sure this one works or is correct until after a review.
2,437
dame-cell
2024-12-19T02:21:49
@osanseviero Just wanted to follow up on this PR and see if there’s any feedback so far. I’m happy to clarify anything or make updates if needed. Let me know whenever you get a chance—thanks so much for your time! 🙌
2,437
qgallouedec
2024-12-19T13:33:33
You still need to revert the changes applied to the PPO files, and apply the pre-commit hooks.
2,437
dame-cell
2024-12-22T14:03:19
@qgallouedec I trained a Qwen model for only 10 steps using both `padding_free=True` and `padding_free=False`, with a batch size of 1 and no gradient accumulation, on a Google Colab T4 GPU.

GPU peak memory:

| Metric      | Padding Free=False | Padding Free=True |
|-------------|--------------------|-------------------|
| PEAK MEMORY | 13.9               | 9.0               |

Here are the last step's metrics during training:

| Metric             | Padding Free=False | Padding Free=True |
|--------------------|--------------------|-------------------|
| Loss               | 0.6939             | 0.6914            |
| Grad Norm          | 4.1626             | 5.1364            |
| Rewards/Chosen     | -0.00055           | 0.00207           |
| Rewards/Rejected   | 0.00093            | -0.00152          |
| Rewards/Accuracies | 0.0                | 1.0               |
| Rewards/Margins    | -0.00148           | 0.00359           |
| LogPs/Chosen       | -57.3851           | -57.3589          |
| LogPs/Rejected     | -29.9984           | -27.7588          |
| Logits/Chosen      | -2.8899            | -2.8894           |
| Logits/Rejected    | -2.7367            | -2.3579           |
| Epoch              | 0.01               | 0.01              |

There still appear to be noticeable differences between the Rewards/Chosen and Rewards/Rejected metrics; despite my efforts, I just could not fix this. With a gradient accumulation of 8, you can fit up to a batch size of 4 with `padding_free=True`.
2,437
qgallouedec
2024-12-22T14:08:21
(Please stop tagging osanseviero unless you have a good reason. He is not involved here; please don't bother him 🙏)
2,437
qgallouedec
2024-12-04T14:52:31
Thanks @AMindToThink for the addition. Strictly speaking, you can't convert a language-modelling dataset (only a text column) to a prompt-completion dataset, because you'd have to be able to extract the prompt from it. I'm afraid that adding this part to the documentation will create confusion. The workaround you use is to have an empty prompt column, which is a bit strange. How about instead making the algorithms natively support this new type in which you only have a `text` column and a `label` column?
2,436
qgallouedec
2024-12-05T21:12:24
Thanks for reporting. Could you please share your system info? Also, I'll need a sample of your dataset to be able to reproduce it. Please do your best to minimize the code as much as you can (it makes things so much easier).
2,435
scarafoni
2024-12-03T23:31:29
Another update: the issue is occurring on this line in rloo_trainer.py

```python
self.eval_dataloader = DataLoader(
    self.eval_dataset,
    batch_size=args.per_device_eval_batch_size,
    collate_fn=DataCollatorWithPadding(self.processing_class),
    drop_last=True,
)  # no need to shuffle eval dataset
```

The dataloader is of length 0, but the dataset is not.
2,434
scarafoni
2024-12-05T17:28:38
I solved the problem: the code was being run on a dummy test set, and the dataloader length came out as 0 because there were not enough samples in it to form a full batch.
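A tiny illustration of that failure mode, independent of the RLOO setup: with `drop_last=True`, a dataset smaller than one batch yields a dataloader of length 0.

```python
from torch.utils.data import DataLoader

dummy_dataset = list(range(3))                                   # only 3 samples
loader = DataLoader(dummy_dataset, batch_size=8, drop_last=True)

print(len(dummy_dataset))  # 3
print(len(loader))         # 0 -> the single incomplete batch is dropped
for batch in loader:       # this loop body never runs
    print(batch)
```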
2,434
HuggingFaceDocBuilderDev
2024-12-03T19:22:09
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2433). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,433
kashif
2024-12-03T19:24:15
Is it because we effectively have 2x bigger batches from the chosen and rejected pairs?
2,433
qgallouedec
2024-12-03T20:31:15
Probably... Merging, as it seems to resolve the issue.
2,433
HuggingFaceDocBuilderDev
2024-12-03T12:17:35
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2432). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,432
HuggingFaceDocBuilderDev
2024-12-03T09:55:17
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2431). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,431
dakru012
2024-12-03T08:16:49
I think these changes alone don't fix the issue; the `PreferenceCollator` also needs small changes. Can I somehow add changes directly to the PR?
2,430
kashif
2024-12-03T08:20:27
Thanks @dakru012! Make a new PR with your fixes and ignore this one, then we can merge both separately; perhaps that is the easiest.
2,430
HuggingFaceDocBuilderDev
2024-12-03T08:46:58
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2430). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,430
qgallouedec
2024-12-03T09:43:49
You're right @dakru012. A change in collator is also needed. Closing this one in favour of #2431
2,430
qgallouedec
2024-12-03T10:18:50
Thanks for suggesting this enhancement @jc-ryan. I agree it would be a great addition. cc @Rocketknight1

> 1. Support for processing and encoding messages with user, assistant, and tool roles (already supported by the tokenizer library)

It would probably be necessary to update the functions in `trl/data_utils.py` to include a `tool` arg. This could be done as a first step.

> 2. Loss calculation specifically for assistant parts (including both general messages and tool-calling messages)

Can you elaborate on this part? Should the loss be calculated differently for the tool part?

> Alternatively, is it possible to implement function calling fine-tuning using existing SFTTrainer features? For example, through a custom collate_fn similar to vlm_sft?

I'm not sure I understand this point. With SFT at least, training is offline, so you shouldn't have to call the tools, right?
2,429
jc-ryan
2024-12-03T12:06:31
> Loss calculation specifically for assistant parts (including both general messages and tool-calling messages)

@qgallouedec Sorry for not being clear earlier. Regarding loss calculation for tool use: there are several role messages, including **system**, **user**, **assistant (tool_call)**, **tool (tool call results)**, and **assistant (plain text)**. We should mainly calculate the loss for the **assistant (tool_call)** and **assistant (plain text)** parts, while masking the loss for the other roles. For the specific data format, you can refer to Mistral: https://docs.mistral.ai/capabilities/finetuning/#2-function-calling-instruct

> Alternatively, is it possible to implement function calling fine-tuning using existing SFTTrainer features? For example, through a custom collate_fn similar to vlm_sft?

This part means I noticed that the SFTTrainer documentation shows how to write a custom collate_fn for VLM SFT to support multimodal model input formats and loss masking (https://huggingface.co/docs/trl/sft_trainer#a-custom-collator-for-processing-multi-modal-data). So I'm not sure if the same approach can be applied to handle function calling data.
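A rough sketch of the kind of loss masking described above, independent of any particular trainer: after tokenizing the conversation, positions belonging to non-assistant messages get the label -100 so that the cross-entropy loss ignores them. The token ids below are made up; a real implementation would derive the per-message spans from the chat template.

```python
# Toy conversation, already tokenized per message; the token ids are made up.
messages = [
    {"role": "system",    "token_ids": [1, 2, 3]},
    {"role": "user",      "token_ids": [4, 5]},
    {"role": "assistant", "token_ids": [6, 7, 8]},   # tool call -> keep in the loss
    {"role": "tool",      "token_ids": [9, 10]},     # tool result -> mask out
    {"role": "assistant", "token_ids": [11, 12]},    # final answer -> keep in the loss
]

IGNORE_INDEX = -100  # ignored by torch.nn.CrossEntropyLoss

input_ids, labels = [], []
for message in messages:
    input_ids.extend(message["token_ids"])
    if message["role"] == "assistant":
        labels.extend(message["token_ids"])
    else:
        labels.extend([IGNORE_INDEX] * len(message["token_ids"]))

print(input_ids)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
print(labels)     # [-100, -100, -100, -100, -100, 6, 7, 8, -100, -100, 11, 12]
```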
2,429
qgallouedec
2024-12-03T18:00:47
Thanks for the clarification. Yes, you're right, the loss is calculated over the entire sequence by default (assistant and user). I think it's ok as a first approach, even if it's probably better to hide these parts in the loss.

For any contributor who wants to tackle this improvement, I suggest (one PR at a time):

1. update the functions in `trl/data_utils.py` to include a tool arg. #2455
2. update the trainers one by one to include tooling in their preprocessing stage (start with SFT)
3. find a clean way to hide the "tool" part of the assistant in the loss (a data collator isn't the right place, IMHO).

As a preliminary step, I'd also say: prove that it's really worth doing.
2,429
qgallouedec
2024-12-03T10:58:43
Thanks for this addition @shirinyamani! Is there a simple way to test it? Ideally a small piece of code that would fail before this PR but passes after? We probably won't be able to add it to the tests because it requires multi-gpu, but at least we'll have it ready. (I'll have to reinvest in multi-gpu testing in the future)
2,427
shirinyamani
2024-12-03T21:13:26
The simplest way might be testing it with an `assert` to make sure `load_in_4bit` is the case? What do you think about it? I can also look into a multi-GPU setup! @qgallouedec

> Thanks for this addition @shirinyamani!
>
> Is there a simple way to test it? Ideally a small piece of code that would fail before this PR but passes after? We probably won't be able to add it to the tests because it requires multi-gpu, but at least we'll have it ready. (I'll have to reinvest in multi-gpu testing in the future)
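A possible shape for such a test, sketched under assumptions: the tiny checkpoint name is a placeholder, the `is_loaded_in_4bit` attribute check would need to be verified against the actual PR, and the test assumes a CUDA device with bitsandbytes installed.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig


def test_model_is_loaded_in_4bit():
    # Requires a CUDA device and bitsandbytes installed.
    quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
    model = AutoModelForCausalLM.from_pretrained(
        "Qwen/Qwen2-0.5B-Instruct",          # placeholder checkpoint
        quantization_config=quant_config,
        device_map="auto",
    )
    assert getattr(model, "is_loaded_in_4bit", False)
```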
2,427