qgallouedec
2024-12-04T14:57:40
To clarify: I think it's okay not to add a test in our unit tests for this PR (it's specific to multi-GPU configuration, which isn't trivial to set up with GitHub Actions, but we'll do it eventually). However, we should check “by hand” that it works. Do you have a small command line / example script that I can run on my local multi-GPU setup to check that it works as expected?
2,427
qgallouedec
2024-12-04T18:36:27
Btw, if you don't have a multi-GPU setup available, feel free to share code that might not be correct; I can test it quickly on my side.
2,427
shirinyamani
2024-12-04T21:29:56
Right now I can think of two approaches, both for when we do NOT want to run manually using `torch.distributed.run`:

1. I just checked: we have nice `accelerate_config` files in `trl`, so we can launch with the `multi_gpu` config from `examples/accelerate_configs` and then set up the `sft` modeling (preferred approach; not sure about the outcome as I don't have access to multiple GPUs at the moment!):

   > accelerate launch --config_file examples/accelerate_configs/multi_gpu.yaml --num_processes=8 examples/scripts/sft.py --model_name_or_path Qwen/Qwen2-0.5B --dataset_name trl-lib/Capybara

2. (Less sure/feasible approach) Launch multi-GPU directly from the accelerate lib; the command for, say, 8 GPUs would be the following, but I'm not really sure it's correct (I think it isn't): [reference](https://github.com/huggingface/accelerate)

   > accelerate launch --multi_gpu --num_processes 8 sft --model_name_or_path Qwen/Qwen2.5-0.5B --dataset_name trl-lib/Capybara --output_dir Qwen2.5-0.5B-SFT
2,427
kashif
2024-12-02T14:26:46
Also, if you can kindly add this config to the tests where we test with the `pre_compute` flag, then we can have it tested as well.
2,426
qgallouedec
2024-12-02T14:29:31
Good point @kashif. We can't really test whether it works, but we can at least check that it doesn't fail when this arg is passed.

```python
def test_precompute_ref_batch_size(self):
    with tempfile.TemporaryDirectory() as tmp_dir:
        training_args = DPOConfig(
            output_dir=tmp_dir,
            per_device_train_batch_size=2,
            precompute_ref_log_probs=True,
            precompute_ref_batch_size=4,
            report_to="none",
        )

        dummy_dataset = load_dataset("trl-internal-testing/zen", "standard_preference")

        trainer = DPOTrainer(
            model=self.model,
            ref_model=self.ref_model,
            args=training_args,
            processing_class=self.tokenizer,
            train_dataset=dummy_dataset["train"],
            eval_dataset=dummy_dataset["test"],
        )

        previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()}

        trainer.train()

        self.assertIsNotNone(trainer.state.log_history[-1]["train_loss"])

        # check that the params have changed - ignore 0 biases
        for n, param in previous_trainable_params.items():
            new_param = trainer.model.get_parameter(n)
            if param.sum() != 0:
                self.assertFalse(torch.allclose(param, new_param, rtol=1e-12, atol=1e-12))
```
2,426
HuggingFaceDocBuilderDev
2024-12-02T15:11:29
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2426). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,426
qgallouedec
2024-12-03T18:11:37
Thanks for raising the question. @gaetanlop is probably best qualified to answer this question.
2,425
fzyzcjy
2024-12-03T23:23:40
Thanks for routing!
2,425
qgallouedec
2024-12-02T08:24:47
Which method does your question refer to?
2,424
NUMB1234
2024-12-02T08:52:20
Just SFT, during the training process.
2,424
qgallouedec
2024-12-02T09:18:39
By default the loss is computed over the whole sequence. You can make SFT ignore some parts of the sequence in the loss by using a custom data collator (see the SFT docs).
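As a rough illustration of how such a collator masks tokens (a sketch, not TRL's actual collator; `mask_prompt_in_labels` is a hypothetical helper): positions whose label is `-100` are ignored by the cross-entropy loss in `transformers`.

```python
import torch

def mask_prompt_in_labels(input_ids, prompt_len, pad_token_id):
    # Copy input_ids to labels and mask the prompt: positions labeled -100
    # are skipped by the cross-entropy loss in `transformers`, so only the
    # completion tokens contribute. `prompt_len` is a hypothetical
    # per-example prompt length.
    labels = input_ids.clone()
    labels[:prompt_len] = -100                # ignore the prompt tokens
    labels[input_ids == pad_token_id] = -100  # ignore padding too
    return labels

# Example: a 6-token sequence whose first 3 tokens are the prompt
ids = torch.tensor([101, 2054, 2003, 7592, 2088, 0])
print(mask_prompt_in_labels(ids, prompt_len=3, pad_token_id=0))
# tensor([-100, -100, -100, 7592, 2088, -100])
```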
2,424
NUMB1234
2024-12-02T10:07:10
I'm sorry, perhaps I didn't explain my question clearly. For multi-turn dialogue data in this format:

```json
{"messages": [{"role": "system", "content": "..."}, {"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}, {"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}, ...]}
```

does this project calculate the loss only for the last assistant response, or for all assistant responses? I couldn't find an explanation in the documentation.
2,424
qgallouedec
2024-12-02T10:12:49
Neither. By default, the loss is computed on the full sequence, i.e., all assistant and user messages.
2,424
NUMB1234
2024-12-02T10:26:02
thanks a lot
2,424
qgallouedec
2024-12-02T14:06:58
That's a good catch, thanks @dakru012! Do you want to submit a PR to fix it?
2,423
SwayamInSync
2024-12-02T14:37:17
> That's a good catch, thanks @dakru012! Do you want to submit a PR to fix it?

I think these lines within `concatenated_forward` are the culprit; the names should be [`ref_chosen_logps`, `ref_rejected_logps`] instead of [`chosen_logps`, `rejected_logps`], and then the same case needs handling in the `compute_ref_log_probs` function:

```python
output["chosen_logps"] = all_logps[:num_examples]
output["rejected_logps"] = all_logps[num_examples:]
```

Let me know if the PR is up; otherwise I can include the relevant fixes in #2426 or make a new one.
2,423
dakru012
2024-12-02T15:00:29
@SwayamInSync I don't think that's the problem. I will take a look at it again and do a PR, but it is already midnight here so I gotta sleep first 😴
2,423
SwayamInSync
2024-12-03T02:34:56
While looking into another issue in the transformers library, I think this function was the cause of this issue:

```python
def _set_signature_columns_if_needed(self):
    # If `self.args.remove_unused_columns` is True, non-signature columns are removed.
    # By default, this method sets `self._signature_columns` to the model's expected inputs.
    # In DPOTrainer, we preprocess data, so using the model's signature columns doesn't work.
    # Instead, we set them to the columns expected by `DPODataCollatorWithPadding`, hence the override.
    if self._signature_columns is None:
        self._signature_columns = ["prompt_input_ids", "chosen_input_ids", "rejected_input_ids", "image_sizes"]
```

It is used to remove the unused columns, and since no `ref` columns are defined here, they all get removed. Changing the above to

```python
def _set_signature_columns_if_needed(self):
    # If `self.args.remove_unused_columns` is True, non-signature columns are removed.
    # By default, this method sets `self._signature_columns` to the model's expected inputs.
    # In DPOTrainer, we preprocess data, so using the model's signature columns doesn't work.
    # Instead, we set them to the columns expected by `DPODataCollatorWithPadding`, hence the override.
    if self._signature_columns is None:
        self._signature_columns = ["prompt_input_ids", "chosen_input_ids", "rejected_input_ids", "image_sizes", "ref_chosen_logps", "ref_rejected_logps"]
```

fixes that condition check. cc: @dakru012
2,423
dakru012
2024-12-03T02:48:37
@SwayamInSync That is a good find; I overlooked that one and just set `remove_unused_columns` to False. I will test it and check the other trainers for similar problems.

I think there is also a small error in the `data_collator` description. It says that `DPODataCollatorWithPadding` is the default collator, but it seems to be `PreferenceCollator` now.
2,423
SwayamInSync
2024-12-03T03:20:36
> @SwayamInSync That is a good find; I overlooked that one and just set `remove_unused_columns` to False. I will test it and check the other trainers for similar problems.
>
> I think there is also a small error in the `data_collator` description. It says that `DPODataCollatorWithPadding` is the default collator, but it seems to be `PreferenceCollator` now.

Hey, awesome, and yes, the documentation about the collator is misleading there. I'll drop a quick fix for both in a PR later; please feel free to add any modifications needed.
2,423
qgallouedec
2024-12-01T10:09:58
Thanks @fzyzcjy! Can you elaborate a bit? What is this padding free method?
2,422
fzyzcjy
2024-12-01T10:20:24
Oh sorry, I provided the wrong link; it's now updated to point to the correct "padding_free" article.
2,422
qgallouedec
2024-12-01T14:05:38
Thanks for the pointer. This would be a nice addition! Any contribution is welcome. I'll mark this one as a good second issue.
2,422
qgallouedec
2024-12-01T14:12:37
The guideline is basically to:

1. Update `PreferenceCollator` to add `padding_free`, like in https://github.com/huggingface/trl/pull/1887
2. Update `concatenated_inputs` to (a) make `xxx_attention_mask` optional and (b) add support for `xxx_position_ids` (see the sketch below)
3. Add a test
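For intuition, here is a minimal sketch of the padding-free idea (an illustration, not the actual TRL implementation): instead of padding each example to the batch maximum, sequences are concatenated into a single row and per-sequence `position_ids` mark the boundaries, so no attention mask is needed.

```python
import torch

def padding_free_collate(batch):
    # Sketch: flatten variable-length sequences into one row. position_ids
    # restart at 0 for each sequence; attention kernels that support
    # variable-length batches (e.g. flash-attention) use these resets to
    # keep sequences from attending to each other.
    input_ids = torch.cat([torch.tensor(seq) for seq in batch]).unsqueeze(0)
    position_ids = torch.cat([torch.arange(len(seq)) for seq in batch]).unsqueeze(0)
    return {"input_ids": input_ids, "position_ids": position_ids}

# Two sequences of lengths 3 and 2 become one row of length 5
out = padding_free_collate([[5, 6, 7], [8, 9]])
print(out["input_ids"])     # tensor([[5, 6, 7, 8, 9]])
print(out["position_ids"])  # tensor([[0, 1, 2, 0, 1]])
```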
2,422
fzyzcjy
2024-12-01T14:19:15
Thank you!
2,422
dame-cell
2024-12-01T14:58:10
- Should `padding_free` in `PreferenceCollator` be an optional argument, or kept as a default?
- But why make `xxx_attention_mask` optional? Is it because padding-free sequences might not use attention masks at all? For example:
  - In regular training with padding, attention masks are needed to tell the model which tokens are real and which are padding (0s for padding, 1s for real tokens).
  - In padding-free training, since we remove all padding tokens, every token is a real token, so we don't need explicit masks to distinguish between real and padding.

Does this make sense? Thank you for your patience; I wanted to verify that I understand these concepts correctly.
2,422
qgallouedec
2024-12-01T15:02:21
I think it makes sense yes.
2,422
dame-cell
2024-12-01T15:03:37
@fzyzcjy @qgallouedec if no one is working on this I would like to help
2,422
fzyzcjy
2024-12-01T23:58:27
@dame-cell I do not have time for that at the moment; your PR would be great, many thanks!
2,422
zwhe99
2024-12-02T09:10:10
Is it possible for PPO to support padding_free?
2,422
qgallouedec
2024-12-01T09:34:31
Thanks for this suggestion @SwayamInSync! Do you have any idea of the gain in speed? If you have a working implementation, feel free to submit a PR so that we can test and discuss the code.
2,421
SwayamInSync
2024-12-02T14:20:00
> Thanks for this suggestion @SwayamInSync! Do you have any idea of the gain in speed? If you have a working implementation, feel free to submit a PR so that we can test and discuss the code.

Made a PR at #2426. From a quick test with my settings, I can only fit a training batch size of up to 8 (I get OOM otherwise), but with this new parameter the inference batch size can go up to 32 (instead of being the same as training), so quite a bit better than before, I guess.
2,421
HuggingFaceDocBuilderDev
2024-11-30T11:12:55
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2419). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,419
qgallouedec
2024-12-03T18:14:51
Yes, that's the right way to do it. Your code looks fine, feel free to share if you get an error running it.
2,418
HuggingFaceDocBuilderDev
2024-11-29T18:20:14
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2417). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,417
qgallouedec
2024-11-29T19:57:22
CI is down for the moment due to a `429 Client Error: Too Many Requests` when trying to download the dataset for testing. Not sure how to solve it.
2,417
qgallouedec
2024-11-30T11:17:27
> CI is down for the moment due to a `429 Client Error: Too Many Requests` when trying to download the dataset for testing. Not sure how to solve it.

Fixed! There is one test still failing (expected, see https://github.com/huggingface/trl/pull/2413#issuecomment-2507953395).
2,417
yiyepiaoling0715
2024-12-05T10:04:56
Good job! When will it be merged? Waiting online! Hah.
2,417
qgallouedec
2024-11-30T13:42:28
> `accelerate launch --config_file deepspeed_zero2.yaml`

Have you fixed #2410? I can't really reproduce, since I can't make online DPO work with DeepSpeed right now.
2,416
qgallouedec
2024-12-03T18:18:07
Thanks for the suggestion. I think we already support it via the [`LogCompletionsCallback`](https://huggingface.co/docs/trl/en/callbacks#trl.LogCompletionsCallback), though not with an arg in the scripts. Can you confirm it's about adding an arg in the scripts (not in the trainers' config)?
2,415
HuggingFaceDocBuilderDev
2024-11-29T09:33:09
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2414). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,414
kashif
2024-11-29T09:33:14
@lewtun would we need the same warning in the other trainers?
2,414
qgallouedec
2024-11-29T11:26:21
Great! Thanks @chenweize1998! Can you try to uncomment https://github.com/huggingface/trl/blob/ac267781ec20a421e07c17c7f2f5670f9a56d41c/tests/test_dpo_trainer.py#L1142
2,413
HuggingFaceDocBuilderDev
2024-11-29T12:51:19
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2413). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,413
qgallouedec
2024-11-29T14:45:01
As expected, "Tests / Tests with dev dependencies" will fail until https://github.com/huggingface/transformers/pull/34953 is merged. We can safely ignore this failing test.
2,413
HuggingFaceDocBuilderDev
2024-11-28T20:40:35
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2412). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,412
HuggingFaceDocBuilderDev
2024-11-28T19:00:00
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2411). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,411
qgallouedec
2024-11-28T19:05:15
The links to the authors are wrong.
2,411
burtenshaw
2024-11-29T09:27:44
@qgallouedec Thanks for the review. I've responded.
2,411
zcw0201
2024-11-28T17:41:06
Sorry, I use "deepspeed_zero2.yaml", so it should be:

```shell
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file deepspeed_zero2.yaml online_dpo.py \
    --model_name_or_path mistralai/Mistral-7B-v0.1 \
    --reward_model_path Ray2333/GRM-Llama3.2-3B-rewardmodel-ft \
    --dataset_name nvidia/HelpSteer2 \
    --learning_rate 5.0e-6 \
    --output_dir pythia-1b-tldr-online-dpo \
    --per_device_train_batch_size 16 \
    --gradient_accumulation_steps 8 \
    --warmup_ratio 0.1 \
    --missing_eos_penalty 1.0 \
    --use_peft
```
2,410
qgallouedec
2024-11-28T17:43:17
Thanks for reporting. Please share your system info (`trl env`)
2,410
zcw0201
2024-11-28T17:44:24
```
/home/ec2-user/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/utils/hub.py:128: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
```

Copy-paste the following information when reporting an issue:

- Platform: Linux-5.10.228-219.884.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.14
- PyTorch version: 2.2.2
- CUDA device(s): NVIDIA A100-SXM4-80GB (x8)
- Transformers version: 4.46.3
- Accelerate version: 0.34.2
- Accelerate config: not found
- Datasets version: 3.1.0
- HF Hub version: 0.26.2
- TRL version: 0.13.0.dev0
- bitsandbytes version: 0.44.1
- DeepSpeed version: 0.16.0
- Diffusers version: not installed
- Liger-Kernel version: not installed
- LLM-Blender version: not installed
- OpenAI version: not installed
- PEFT version: 0.13.2
2,410
sergiopaniego
2024-11-28T16:52:15
Working example: [notebook](https://colab.research.google.com/drive/1ST017t2yLphjxGGMTzKmwevdsGl5B7aL?usp=sharing)

The changes could also be integrated in `sft_vlm.py`, since they are small:

* A new import for the model
* The image below:

![image](https://github.com/user-attachments/assets/36e7657c-b05a-46e5-95c6-4ca9d9621af6)
2,409
qgallouedec
2024-11-28T16:58:10
Looks good to me, thanks a lot @sergiopaniego! Do you by any chance have a model or training curves to share?
2,409
HuggingFaceDocBuilderDev
2024-11-28T16:58:15
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2409). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,409
sergiopaniego
2024-11-28T17:04:59
I trained a model last night since I'm working on a recipe 🤗

Model: https://huggingface.co/sergiopaniego/smolvlm-base-instruct-trl-sft-ChartQA
Tensorboard: https://huggingface.co/sergiopaniego/smolvlm-base-instruct-trl-sft-ChartQA/tensorboard

![Captura de pantalla 2024-11-27 a las 21 19 51](https://github.com/user-attachments/assets/78c7e9f8-d7e2-4b78-85e9-5f166a5e78ad)
2,409
qgallouedec
2024-11-28T17:07:11
you rock!
2,409
HuggingFaceDocBuilderDev
2024-11-28T14:58:54
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2407). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,407
qgallouedec
2024-11-28T15:21:16
Can probably be improved, but at least there is a doc for it now.
2,407
qgallouedec
2024-12-02T14:09:47
Thanks for the suggestion. Feel free to open a PR to add this.
2,406
HuggingFaceDocBuilderDev
2024-12-01T13:04:11
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2405). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,405
qgallouedec
2024-12-04T13:20:35
FSDP + QLoRA from https://github.com/huggingface/peft/blob/main/examples/sft/run_peft_qlora_fsdp.sh

green: main
purple: PR

<img width="1058" alt="Screenshot 2024-12-04 at 14 18 09" src="https://github.com/user-attachments/assets/337f39ab-ebf6-44a2-8878-949c45257c16">
2,405
HuggingFaceDocBuilderDev
2024-11-27T15:44:54
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2402). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,402
qgallouedec
2024-11-28T18:59:36
Closing in favour of #2411
2,402
qgallouedec
2024-12-13T22:51:33
You're right, the documentation is wrong. Would you like to contribute by correcting it?
2,400
umbilnm
2024-12-25T10:20:41
Hi! I can correct it. Based on the discussion, it seems we could take one of two approaches:

1) Completely remove this mention from the “Best Practices” section
2) Update the text to clarify that truncation (rather than padding) happens by default

Could you let me know which approach is better?
2,400
qgallouedec
2024-12-25T10:49:41
Probably 2. What do you think?
2,400
umbilnm
2024-12-25T11:08:13
Ok. Also, if `max_seq_length` isn't specified, the trainer sets it to `min(1024, tokenizer.model_max_length)` (not 2048), so the revised text may look like this:

> SFTTrainer truncates sequences by default to the `max_seq_length` specified. If `max_seq_length` is not provided, the trainer sets it to the minimum of `tokenizer.model_max_length` and 1024. Ensure you verify this setting before training to avoid unintended behavior.
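In code, the default described above amounts to something like this (a sketch, not the exact trainer implementation; `resolve_max_seq_length` is a hypothetical helper):

```python
def resolve_max_seq_length(max_seq_length, model_max_length):
    # Sketch of the default described above, not the exact TRL code
    if max_seq_length is None:
        max_seq_length = min(model_max_length, 1024)
    return max_seq_length

print(resolve_max_seq_length(None, 131072))  # 1024
print(resolve_max_seq_length(None, 512))     # 512
print(resolve_max_seq_length(2048, 131072))  # 2048
```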
2,400
HuggingFaceDocBuilderDev
2024-11-26T18:49:29
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2399). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,399
HuggingFaceDocBuilderDev
2024-11-26T15:15:36
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2398). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,398
HuggingFaceDocBuilderDev
2024-11-26T13:44:04
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2397). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,397
qgallouedec
2024-11-26T10:35:31
This script has been renamed [`sft_vlm.py`](https://github.com/huggingface/trl/blob/main/examples/scripts/sft_vlm.py) in https://github.com/huggingface/trl/pull/2120
2,396
soumyasj
2024-11-26T15:04:41
Thank you! Closing this issue!
2,396
HuggingFaceDocBuilderDev
2024-11-26T10:26:49
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2395). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,395
HuggingFaceDocBuilderDev
2024-11-25T17:24:49
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2394). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,394
HuggingFaceDocBuilderDev
2024-11-25T15:45:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2393). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,393
HuggingFaceDocBuilderDev
2024-11-25T14:28:21
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2392). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,392
qgallouedec
2024-11-25T09:55:22
You don't need to process the data; the trainer does it for you. The `SFTTrainer` expects a dataset with a column named `"text"` (or `"messages"` for conversational data). Use [trl-internal-testing/zen](https://huggingface.co/datasets/trl-internal-testing/zen) as an example.

Example for conversational data:

```python
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

dataset = load_dataset("trl-internal-testing/zen", "conversational_language_modeling", split="train")
training_args = SFTConfig(output_dir="Llama-3.2-1B-Instruct-SFT")
trainer = SFTTrainer(
    args=training_args,
    model="meta-llama/Llama-3.2-1B-Instruct",
    train_dataset=dataset,
)
trainer.train()
```
2,390
Humauaca
2024-11-26T17:38:35
So if I loaded the model and tokenizer with the class methods `AutoModelForCausalLM.from_pretrained` and `AutoTokenizer.from_pretrained`, respectively, I should pass the tokenizer to the `SFTTrainer` instance as an argument, right?
2,390
qgallouedec
2024-11-26T18:04:29
Yes
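For example, a minimal sketch (assumption: recent TRL versions take the tokenizer via `processing_class`, as in the test snippet above; older releases use a `tokenizer` argument instead):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
dataset = load_dataset("trl-internal-testing/zen", "conversational_language_modeling", split="train")

trainer = SFTTrainer(
    args=SFTConfig(output_dir="Llama-3.2-1B-Instruct-SFT"),
    model=model,
    processing_class=tokenizer,  # `tokenizer=tokenizer` on older TRL versions
    train_dataset=dataset,
)
trainer.train()
```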
2,390
Alex-Mathai-98
2024-12-24T17:24:35
Hi @qgallouedec - just a simple follow-up question. In the conversation format, does the SFT Trainer mask out the loss for the instructions? Or does it compute the loss for both the instructions and the responses? No one seems to know the answer to this online.
2,390
qgallouedec
2024-12-24T17:38:01
Not by default; you need to use `DataCollatorForCompletionOnlyLM`. See https://huggingface.co/docs/trl/sft_trainer#train-on-completions-only
2,390
Alex-Mathai-98
2024-12-24T19:30:17
I see, @qgallouedec. Thank you so much for your quick response :bow:. I just want to make sure I understand your response perfectly.

I was talking about https://huggingface.co/docs/trl/sft_trainer#dataset-format-support - so even if we format our data in this **conversations** format, HF will tokenize the entire conversation as a sequence of tokens and then perform **simple language modeling** on the entire text.

The link https://huggingface.co/docs/trl/sft_trainer#train-on-completions-only you mentioned in your response seems to cover a use case that is **NOT** conversations, right? In a multi-turn conversation there will be **multiple** answers, unlike the example mentioned there. In this case, the masking needs to be done for all intermediate instruction text, right? For example, a two-turn conversation {user, assistant, user, assistant} would have {Mask On, Mask Off, Mask On, Mask Off}. I guess I will need to do this with a custom data collator?
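For reference, a rough sketch of the multi-turn masking idea, assuming TRL's `DataCollatorForCompletionOnlyLM` (its `instruction_template` argument is designed for the multi-turn case); the template strings are assumptions for a ChatML-style tokenizer and should be adjusted to yours:

```python
from transformers import AutoTokenizer
from trl import DataCollatorForCompletionOnlyLM

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

# With both templates set, every span from an instruction marker up to the
# next response marker is labeled -100, so the loss covers all assistant
# turns and none of the user turns.
collator = DataCollatorForCompletionOnlyLM(
    instruction_template="<|im_start|>user\n",    # assumed ChatML marker
    response_template="<|im_start|>assistant\n",  # assumed ChatML marker
    tokenizer=tokenizer,
)
```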
2,390
HuggingFaceDocBuilderDev
2024-11-24T15:17:22
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2389). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,389
coder109
2024-11-27T03:25:48
It is OK if you parse the parameters **after** loading DDPOTrainer(). But I'd like to know what causes these two unrelated functions to affect each other.
2,388
coder109
2024-11-27T07:08:02
I PROBABLY know the cause. If you have encountered the same problem, please modify the source code of `DDPOTrainer()` like this:

```python
self.accelerator = Accelerator(
    log_with=self.config.log_with,
    mixed_precision=self.config.mixed_precision,
    project_config=accelerator_project_config,
    # we always accumulate gradients across timesteps; we want config.train.gradient_accumulation_steps to be the
    # number of *samples* we accumulate across, so we need to multiply by the number of training timesteps to get
    # the total number of optimizer steps to accumulate across.
    gradient_accumulation_steps=self.config.train_gradient_accumulation_steps * self.num_train_timesteps,
    **self.config.accelerator_kwargs,
)

# Accelerate MOD BEGIN
self.accelerator.state.deepspeed_plugin.deepspeed_config['train_micro_batch_size_per_gpu'] = 4
# Accelerate MOD END
```

It seems that `DDPOTrainer()` cannot properly load the DeepSpeed config from external JSON files.
2,388
qgallouedec
2024-11-26T09:50:17
Thanks for reporting it. Would you like to open a PR to fix it?
2,387
HuggingFaceDocBuilderDev
2024-11-22T18:41:28
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2386). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,386
qgallouedec
2024-11-22T18:21:38
Thanks for reporting. It likely comes from the chat template. Can you share it?
2,385
qgallouedec
2024-11-22T18:24:26
To further explain the error, we expect a chat template that verifies:

```python
formatted_prompt = tokenizer.apply_chat_template(prompt, add_generation_prompt=True, tokenize=False)
formatted_prompt_completion = tokenizer.apply_chat_template(prompt + completion, tokenize=False)
assert formatted_prompt_completion.startswith(formatted_prompt)
```

Example with Qwen:

```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
>>> prompt = [{"role": "user", "content": "Where is Paris?"}]
>>> completion = [{"role": "assistant", "content": "In France."}]
>>> formatted_prompt = tokenizer.apply_chat_template(prompt, add_generation_prompt=True, tokenize=False)
>>> formatted_prompt_completion = tokenizer.apply_chat_template(prompt + completion, tokenize=False)
>>> formatted_prompt
'<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n<|im_start|>user\nWhere is Paris?<|im_end|>\n<|im_start|>assistant\n'
>>> formatted_prompt_completion
'<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n<|im_start|>user\nWhere is Paris?<|im_end|>\n<|im_start|>assistant\nIn France.<|im_end|>\n<|im_start|>assistant\n'
>>> formatted_prompt_completion.startswith(formatted_prompt)
True
```
2,385
qgallouedec
2024-11-22T18:34:03
It may come from here in your example:

```diff
ds = ds.map(
    lambda x: {
        "system": [{"role": "user", "content": x["system"]}],
        "prompt": [{"role": "user", "content": x["prompt"]}],
        "chosen": [{"role": "assistant", "content": x["chosen"]}],
-       "rejected": [{"role": "user", "content": x["rejected"]}],
+       "rejected": [{"role": "assistant", "content": x["rejected"]}],
    }
)
```
2,385
MohamedAliRashad
2024-11-22T18:47:59
@qgallouedec I am the stupidest person on earth. Thanks a lot!
2,385
HuggingFaceDocBuilderDev
2024-11-22T18:09:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2384). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
2,384
qgallouedec
2024-11-24T16:07:10
Hi! Thanks for the suggestion. It could be a great addition. I haven't read the paper in detail yet but what you describe sounds closer to KTO than DPO, doesn't it? Do you have an implementation that already works?
2,383
AML14
2024-11-22T12:55:31
Update: DPO doesn't even work with a code completion task (i.e., neither the input nor the output includes FIM special tokens) with the base model. As an example, here is the output generated by `Qwen/Qwen2.5-Coder-0.5B` for the following input:

```java
// Input:
protected RouteBuilder createRouteBuilder() throws Exception {
    return new RouteBuilder() {
// Output:
        @Override
        public void configure() throws Exception {
            from("direct:hello")
                .to("mock:hello");
        }
    };
}<|endoftext|>
```

And here is the output of the same model after having applied DPO with about 3000 instances, where the prompt is the input and the chosen/rejected are correct/wrong completions:

```java
// Input:
protected RouteBuilder createRouteBuilder() throws Exception {
    return new RouteBuilder() {
// Output:
        public void configure() throws Exception {
<|fim_middle|>
<|fim_middle|>
<|fim_middle|><|endoftext|>
```

The model is completely broken after applying DPO.
2,382
yiyepiaoling0715
2024-11-23T10:18:06
> And here is the output of the same model after having applied DPO with about 3000 instances, where the prompt is the input and the chosen/rejected are correct/wrong completions:

Why wouldn't it work with a code completion task? I also do code completion with RL and get some benefit; maybe it doesn't work in your situation because of your training corpus.
2,382
qgallouedec
2024-11-23T16:16:27
Is your dataset public? What do the training curves look like?
2,382
qgallouedec
2024-11-23T16:22:27
Can you confirm that your effective batch size is 8?
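(For context, "effective batch size" follows the usual accounting below; a sketch assuming standard `accelerate`/`Trainer` semantics, with hypothetical example values:)

```python
# effective batch size = per-device batch size x grad accumulation x processes
per_device_train_batch_size = 2
gradient_accumulation_steps = 4
num_processes = 1  # number of GPUs

print(per_device_train_batch_size * gradient_accumulation_steps * num_processes)  # 8
```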
2,382
kashif
2024-11-25T12:09:56
@AML14 can you do a quick experiment where you remove the `EOS` token from the `chosen` and `rejected` keys, as the `DPOTrainer` by default adds those to the ends of the chosen and rejected `input_ids`?
2,382
kashif
2024-11-26T13:06:07
@AML14 so I tried with your data and `beta=0.9`, and the run is here: https://wandb.ai/krasul/huggingface/runs/wkekg3nb?nw=nwuserkrasul

The outputs also look fine to me:

```
>>> input_text = """<|fim_prefix|>def quicksort(arr):
...     if len(arr) <= 1:
...         return arr
...     pivot = arr[len(arr) // 2]
... <|fim_suffix|>
...     middle = [x for x in arr if x == pivot]
...     right = [x for x in arr if x > pivot]
...     return quicksort(left) + middle + quicksort(right)<|fim_middle|>"""
>>> model_inputs = TOKENIZER([input_text], return_tensors="pt").to(device)
>>> generated_ids = MODEL.generate(model_inputs.input_ids, max_new_tokens=512, do_sample=False)[0]
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:None for open-end generation.
>>> output_text = TOKENIZER.decode(generated_ids[len(model_inputs.input_ids[0]):], skip_special_tokens=True)
>>> print(f"Prompt: {input_text}\n\nGenerated text: {output_text}")
Prompt: <|fim_prefix|>def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
<|fim_suffix|>
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)<|fim_middle|>

Generated text: left = [x for x in arr if x < pivot]
```
2,382