user
stringlengths 3
28
| created_at
timestamp[us] | body
stringlengths 1
173k
| issue_number
int64 1
2.51k
|
---|---|---|---|
SwayamInSync | 2024-12-21T19:58:52 | This accounted with `SFTTrainer` if this is a general issue with `Trainer` from transformers, can be re-located there | 2,514 |
HuggingFaceDocBuilderDev | 2024-12-21T12:12:26 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2513). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,513 |
HuggingFaceDocBuilderDev | 2024-12-21T00:10:35 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2512). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,512 |
HuggingFaceDocBuilderDev | 2024-12-20T23:42:15 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2511). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,511 |
HuggingFaceDocBuilderDev | 2024-12-20T21:43:27 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2510). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,510 |
HuggingFaceDocBuilderDev | 2024-12-20T16:10:32 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2509). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,509 |
metric-space | 2024-12-21T21:30:33 | @aivolcano There is a notebook that is related to this. The updated notebook is here: https://github.com/huggingface/trl/blob/main/examples/notebooks/best_of_n.ipynb | 2,508 |
HuggingFaceDocBuilderDev | 2024-12-20T11:30:43 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2507). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,507 |
Mecoli1219 | 2024-12-20T06:46:11 | Wait for https://github.com/linkedin/Liger-Kernel/pull/492 | 2,506 |
metric-space | 2024-12-21T21:33:11 | @nguyenhoa-uit I can help out with this as this was code I wrote more than a year ago. Mind you, I'll be very very slow. Let me take a look | 2,505 |
metric-space | 2024-12-23T09:46:31 | @nguyenhoa-uit could you try this bit : https://github.com/huggingface/trl/blob/main/trl/trainer/ddpo_config.py#L64 ? | 2,505 |
ggbetz | 2024-12-20T15:19:13 | It seems @philschmid has in implementation here: https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/391f19ba06c128a2a290b3bdcb717ad6ff794fd7/training/scripts/run_sft.py#L54-L77 and the question is maybe just what's the best cleanest way to integrate this natively in trl? | 2,504 |
anakin87 | 2024-12-21T16:25:29 | This would be great and would prevent users from making mistakes in the manual implementation of this method: for example, [the code for integration with other libraries reported in the official repo](https://github.com/cognitivecomputations/spectrum?tab=readme-ov-file) has some problems. In contrast, the simple implementation in [my tutorial](https://huggingface.co/blog/anakin87/spectrum) and Philipp's code should be correct.
BTW, Spectrum is quite agnostic with respect to training method (SFT, DPO...): the [models by VAGO solutions](https://huggingface.co/VAGOsolutions) show that it works well for DPO too.
LMK what's the better way to proceed and help with this integration. | 2,504 |
HuggingFaceDocBuilderDev | 2024-12-19T10:50:45 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2503). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,503 |
HuggingFaceDocBuilderDev | 2024-12-19T10:13:19 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2502). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,502 |
qgallouedec | 2024-12-23T12:38:06 | Can you screenshot a result? | 2,501 |
HuggingFaceDocBuilderDev | 2024-12-23T12:41:22 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2501). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,501 |
yaricom | 2024-12-23T12:43:48 | Sure, here is s screenshot from my account at Comet.
<img width="2106" alt="Screenshot 2024-12-23 at 14 42 20" src="https://github.com/user-attachments/assets/69629fdb-77de-4a2d-b1d2-087889d96a4c" />
| 2,501 |
yaricom | 2024-12-23T12:45:02 | And this is a DataFrame encoded as CSV.
[game_log.csv](https://github.com/user-attachments/files/18229453/game_log.csv)
| 2,501 |
yaricom | 2024-12-23T13:08:10 | The script I was using to test DPO trainer integration.
```python
import os
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer
os.environ["TOKENIZERS_PARALLELISM"] = "false"
def main():
output_dir = "models/minimal/dpo_my"
model_id = "trl-internal-testing/tiny-Qwen2ForCausalLM-2.5"
# model_id = "Qwen/Qwen2-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
ref_model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
training_args = DPOConfig(
output_dir=output_dir,
per_device_train_batch_size=2,
max_steps=1,
remove_unused_columns=False,
gradient_accumulation_steps=8,
precompute_ref_log_probs=False,
learning_rate=5.0e-7,
eval_strategy="steps",
eval_steps=1,
report_to="all",
generate_during_eval=True,
max_length=1024,
)
# dummy_dataset = load_dataset("trl-internal-testing/zen", "standard_preference")
dummy_dataset = load_dataset("trl-lib/ultrafeedback_binarized", "default")
dummy_dataset["train"] = dummy_dataset["train"].select(range(20))
dummy_dataset["test"] = dummy_dataset["test"].select(range(40))
trainer = DPOTrainer(
model=model,
ref_model=ref_model,
args=training_args,
processing_class=tokenizer,
train_dataset=dummy_dataset["train"],
eval_dataset=dummy_dataset["test"],
)
trainer.train()
trainer.evaluate()
if __name__ == "__main__":
main()
```
Do not forget to set `COMET_APY_KEY` environment variable while executing it.
| 2,501 |
asparius | 2024-12-18T13:50:40 | trl uses accelerate which supports FSDP. However there is no recommeded config of FSDP in the repo unlike DeepSpeed, so you could refer to this [page](https://huggingface.co/docs/accelerate/en/usage_guides/fsdp) for FSDP. All in all, DPO and trl supports FSDP but not for online algo like PPO #1726. | 2,500 |
yingtongxiong | 2024-12-19T05:57:01 | > trl uses accelerate which supports FSDP. However there is no recommeded config of FSDP in the repo unlike DeepSpeed, so you could refer to this [page](https://huggingface.co/docs/accelerate/en/usage_guides/fsdp) for FSDP. All in all, DPO and trl supports FSDP but not for online algo like PPO #1726.
@asparius Thank you very much | 2,500 |
HuggingFaceDocBuilderDev | 2024-12-17T23:16:28 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2499). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,499 |
HuggingFaceDocBuilderDev | 2024-12-17T19:12:30 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2498). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,498 |
qgallouedec | 2024-12-17T22:30:39 | Yeah! thanks @sergiopaniego 🤘 | 2,498 |
asparius | 2024-12-18T14:14:35 | This has been noted previously #2281. I believe this was introduced in PPOv2 which was replication of the openai tldr paper which also contains this INVALID_LOGPROB=1.0 which does not break training because it cancels out at kl reward. Perhaps @vwxyzjn can tell why this was used, instead of masked_mean version | 2,496 |
Mecoli1219 | 2024-12-20T05:30:02 | Hi, I want to check that SimPO is in CPO instead of DPO, right? | 2,495 |
qgallouedec | 2024-12-20T11:01:35 | > Hi, I want to check that SimPO is in CPO instead of DPO, right?
Correct! Message modified | 2,495 |
HuggingFaceDocBuilderDev | 2024-12-17T08:16:37 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2494). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,494 |
qgallouedec | 2024-12-17T11:19:51 | Probably simpler:
```python
from huggingface_hub import ModelCard
model_card = ModelCard("""
---
tags: [trl]
---
# Some title
""")
if script_args.push_to_hub:
model_card.push_to_hub(script_args.repo_id, repo_type="dataset")
```
| 2,491 |
August-murr | 2024-12-17T12:15:50 | Well, that's one way to overengineer it
I also opened [issue on datasets](https://github.com/huggingface/datasets/issues/7336) to clarify.
I assume the next step is to add this to all the dataset scripts. | 2,491 |
qgallouedec | 2024-12-17T13:14:11 | Very good like this | 2,491 |
qgallouedec | 2024-12-16T12:14:28 | Thanks for reporting, please provide a *minimal* code/steps to reproduce this. | 2,490 |
sagie-dekel | 2024-12-16T12:48:53 | pipeline.zip (edit by maintainer: remove link)
thanks @qgallouedec
The attached files constitute a pipeline that using the DPOTrainer with DeepSpeed.
I am sorry that its ain't minimal but i don't see easy way to reproduce. if you prefer I can write the main steps. | 2,490 |
qgallouedec | 2024-12-16T13:52:12 | Sorry but we don't use zip files. The easy way to provide a MRE is to go line by line, if the error remains when you remove it, then you can discard the line. When there is no line left to remove, you have your MRE | 2,490 |
sagie-dekel | 2024-12-16T16:16:47 | Sorry @qgallouedec , here a minimal code of my pipeline:
```python
import argparse
import json
import math
import pandas as pd
import torch
from accelerate import Accelerator
from datasets import Dataset
from torch import optim, nn
from torch.optim.lr_scheduler import LambdaLR
from tqdm import tqdm
from transformers import AutoTokenizer, AutoModelForMaskedLM, AutoModel, AutoModelForCausalLM
from trl import PPOConfig, PPOTrainer, AutoModelForCausalLMWithValueHead, DPOConfig, DPOTrainer
from huggingface_hub import login
import os
import itertools
model_RLRF = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct", torch_dtype=torch.float32)
tokenizer_RLRF = AutoTokenizer.from_pretrained(model_RLRF_name_or_path)
tokenizer_RLRF.add_special_tokens({'pad_token': tokenizer_RLRF.eos_token})
tokenizer_RLRF.padding_side = 'left'
DPO_config = DPOConfig(
report_to='tensorboard',
logging_first_step=True,
"per_device_train_batch_size": 4,
"gradient_accumulation_steps": 1,
"sync_ref_model": true,
"ref_model_mixup_alpha": 0.6,
"ref_model_sync_steps": 256,
"bf16": True,
)
# Create reference model:
parameter_names = [n for n, _ in model_RLRF.named_parameters()]
ref_model = deepcopy(model)
# if no layers are shared, return copy of model
for param_name in parameter_names:
param = ref_model.get_parameter(param_name)
param.requires_grad = False
ref_model.eval()
# Set optimizer for RLRF
optimizer_RLRF = optim.AdamW(filter(lambda param: param.requires_grad, model_RLRF.parameters()),
lr=1.41e-5)
train_dataset= pd.read_csv("perfernces_dataset_from_ranker_train_queries_and_baseline_doc.csv")
train_dataset= Dataset.from_pandas(train_dataset_RLRF)
dpo_trainer = DPOTrainer(model=model, args=DPO_config, processing_class=tokenizer_RLRF, ref_model=ref_model,
optimizers=(optimizer, None), train_dataset=train_dataset)
dpo_trainer.train()
```
the loaded data file (train_dataset) is:
[perfernces_dataset_from_ranker_train_queries_and_baseline_doc.csv](https://github.com/user-attachments/files/18153383/perfernces_dataset_from_ranker_train_queries_and_baseline_doc.csv) | 2,490 |
qgallouedec | 2024-12-16T11:04:09 | Good point, given that for other trainers (like DPO), it's a truncation.
In fact, the best thing would be to have a common behavior for all trainers (truncation), but the urgent thing is to clarify the documentation. | 2,488 |
HuggingFaceDocBuilderDev | 2024-12-16T09:16:23 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2487). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,487 |
Ciao-CA | 2024-12-20T07:32:59 | I have the same problem | 2,486 |
HuggingFaceDocBuilderDev | 2024-12-15T19:39:31 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2485). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,485 |
HuggingFaceDocBuilderDev | 2024-12-15T18:22:29 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2484). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,484 |
HuggingFaceDocBuilderDev | 2024-12-15T16:35:22 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2483). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,483 |
HuggingFaceDocBuilderDev | 2024-12-15T12:58:48 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2482). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,482 |
qgallouedec | 2024-12-15T15:34:21 | 2 questions/remarks:
- can you run benchmark so that we can (1) quantify the improvement and (2) check that results with and without liger are the same
- we could have an additional tag for the hub when a model is trained with liger
| 2,482 |
qgallouedec | 2024-12-15T15:48:46 | I think we should bump liger version to v0.5 (it doesn't include the loss before), see https://github.com/linkedin/Liger-Kernel/releases/tag/v0.5.0 | 2,482 |
kashif | 2024-12-18T10:46:56 | waiting on https://github.com/linkedin/Liger-Kernel/pull/486 | 2,482 |
kashif | 2024-12-19T10:09:55 | waiting on https://github.com/huggingface/trl/pull/2502
| 2,482 |
qgallouedec | 2024-12-19T10:33:44 | @kashif can you share the curves once it's ready? | 2,482 |
kashif | 2024-12-15T09:31:55 | @hteague-qti so I wanted to get it working with this collator and then come back and make it more general after that.. so would you have a suggestion on what the next generalization could be? make it work for the SFT default collator?
| 2,481 |
hteague-qti | 2024-12-16T19:28:17 | I was thinking it could be made completely independent of the collator. First thing might be to warn users that even though they are providing a collator in the args, you are switching to a different one (for now).
Seems to me that trainer should not care about the data preprocessing or the collator, just the output logits, etc. Making it work with default collator in SFT would be fine. This one is quite common for language: DataCollatorForCompletionOnlyLM | 2,481 |
hteague-qti | 2024-12-19T21:39:17 | btw, appreciate the response. | 2,481 |
HuggingFaceDocBuilderDev | 2024-12-14T21:45:10 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2480). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,480 |
August-murr | 2024-12-14T18:57:56 | Before adding it to all the trainers, what do you think of the overall structure? Is it okay to include the tools in each trainer configuration? | 2,479 |
qgallouedec | 2024-12-14T19:05:11 | Thanks for this addition!
Let's keep things as separate as possible, and keep this PR for DPO only.
The code as is looks good to me. The only question is: can this type (`Optional[list[Union[dict, Callable]]]`) being parsed. I'll try.
| 2,479 |
qgallouedec | 2024-12-14T19:27:17 | That's why I thought:
```python
from trl import DPOConfig, TrlParser
parser = TrlParser((DPOConfig,))
parser.parse_args_and_config()
```
```
$ python 2479.py --output_dir out --tools "{'type': 'function', 'function': {'name': 'multiply', 'description': 'A function that multiplies two numbers', 'parameters': {'type': 'object', 'properties': {'a': {'type': 'number', 'description': 'The first number to multiply'}, 'b': {'type': 'number', 'description': 'The second number to multiply'}}, 'required': ['a', 'b']}}}"
[...]
2479.py: error: argument --tools: invalid Union value: "{'type': 'function', 'function': {'name': 'multiply', 'description': 'A function that multiplies two numbers', 'parameters': {'type': 'object', 'properties': {'a': {'type': 'number', 'description': 'The first number to multiply'}, 'b': {'type': 'number', 'description': 'The second number to multiply'}}, 'required': ['a', 'b']}}}"
```
I'm not sure what the best way to handle it right now, I'll sleep on it.
| 2,479 |
August-murr | 2024-12-15T08:51:45 | > Let's keep things as separate as possible, and keep this PR for DPO only.
a different PR for each trainer then?
> can this type `(Optional[list[Union[dict, Callable]]])` being parsed.
Adding tools to the CLI would be quite complicated. It wouldn't be practical to add all the tools into the CLI. My best guess is to read the functions from another source, like another script, if there’s a request for it later. | 2,479 |
August-murr | 2024-12-16T08:22:54 | does this need anything else? test or docs? | 2,479 |
HuggingFaceDocBuilderDev | 2024-12-13T20:46:51 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2476). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,476 |
HuggingFaceDocBuilderDev | 2024-12-13T19:02:02 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2475). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,475 |
HuggingFaceDocBuilderDev | 2024-12-13T17:43:29 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2474). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,474 |
asparius | 2024-12-14T00:28:52 | It utilizes `self.model`, which is defined in [[this line](https://github.com/huggingface/trl/blob/6d4ed070f1f53a87fb3cff2eb82a56db093bccc6/trl/trainer/rloo_trainer.py#L162)](https://github.com/huggingface/trl/blob/6d4ed070f1f53a87fb3cff2eb82a56db093bccc6/trl/trainer/rloo_trainer.py#L162). This approach is also adopted in `PPOTrainer`. I believe this is a deliberate nomenclature choice, designed to remain consistent across various preference learning frameworks without introducing the complexity of aligning with the diverse terminologies used in academic papers. | 2,472 |
qgallouedec | 2024-12-13T16:33:05 | Yes, that's a good point!
All datasets in [hf.co/trl-lib](https://huggingface.co/trl-lib) are taken from an original dataset. We should at least indicate this dataset in the readme with something like:
```
This dataset is a processed version of [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) with this [script](https://github.com/huggingface/trl/blob/main/examples/datasets/ultrafeedback.py).
```
To do this, we should add to all script in https://github.com/huggingface/trl/blob/main/examples/datasets a model card that we push, like in https://github.com/huggingface/trl/blob/179ba5367181d9bd4bdaec70d50789b09754d04a/scripts/generate_tiny_models.py#L69-L97
We could also add the type/format of dataset with a link to the relevant section in this page of the documentation: https://huggingface.co/docs/trl/en/dataset_formats | 2,470 |
qgallouedec | 2024-12-13T16:44:51 | What you're describing sounds closer to _padding-free_ than packing. We have a (currently draft) PR for this: #2437.
Can you confirm that's it is what you're describing?
---
At this point I'm not even sure that packing for DPO makes sense. How to ensure that you've as many chosen than rejected? How to ensure they match? How to handle partial sequences? | 2,469 |
zhc7 | 2024-12-13T17:16:15 | Hi, thank you for your response. I looked into the link you provided. I think we are talking about the same thing. I used the word "packing" from https://huggingface.co/blog/packing-with-FA2. The "packing" here actually means concatenating a fixed batch size of samples into one sequence, and use `position_ids` to mark the boundaries, rather than packing to a fixed length. So there won't be the problems you mentioned. I've also briefly read https://huggingface.co/blog/mayank-mishra/padding-free-transformer this blog, I think the ideas are the same. But I'm not sure how the latter is implemented. Maybe they are the same thing just with different names:)
I breifly went through the pr, I see it is trying to add `position_ids` in the whole process, so I guess we are talking about the same thing. | 2,469 |
qgallouedec | 2024-12-13T16:51:33 | That's a good point! Feel free to open a PR to fix this. I don't think adding a unittest for this is relevant. If possible, add plots (eg, with wandb) before/after to ensure that we aren't introducing a regression | 2,468 |
zhc7 | 2024-12-13T17:17:59 | Ofcourse!
![image](https://github.com/user-attachments/assets/2da93fdf-a29d-41a1-974a-2b640e3a6ee6)
here's a graph for the same training with and without the modification. You can see the pink line is a lot more smoother. Especially the accuracy graph. My `per_device_batch_size` is 2 so the accuracy per device can only be 1, 0.5 or 0. | 2,468 |
qgallouedec | 2024-12-13T17:34:35 | Perfect! | 2,468 |
HuggingFaceDocBuilderDev | 2024-12-12T14:04:44 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2467). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,467 |
qgallouedec | 2024-12-13T17:52:47 | That's very interesting! It would be a nice improvement.
If you want to tackle this problem, you should be aware that packing will be implemented differently (in a simpler way) in the near future, see #2405. You should branch from there. | 2,466 |
qgallouedec | 2024-12-12T10:44:50 | First, SAC is designed for continuous action spaces, whereas NLP tasks involve discrete token outputs. (A discrete variant of SAC exists though.)
Second, SAC lacks a mechanism to constrain the policy from deviating too far from the initial model. PPO, on the other hand, explicitly limits policy updates, which is crucial in RLHF to maintain alignment and preserve the pretrained model’s capabilities. Without such constraints, SAC could result in a policy that either drifts excessively or remains overly similar to the original model.
Finally, SAC's entropy maximization encourages broad exploration, which may be counterproductive in RLHF. Finetuning typically involves domain-specific data, and excessive exploration could lead to unaligned or undesirable behaviors. This mechanism might inadvertently encourage unlearning of pretrained knowledge.
That said, these points are speculative and based on intuition. I'd be eager to see papers or results that either confirm or challenge these hypotheses.
| 2,465 |
AMindToThink | 2024-12-13T21:11:39 | Thank you for the response.
The [RLOO trainer](https://arxiv.org/pdf/2402.14740) also lacks PPO's clipping mechanism that constrains the policy from deviating too far from the previous policy. [It turns out](https://huggingface.co/blog/putting_rl_back_in_rlhf_with_rloo) that for RLHF on pretrained language models, that clipping step is not necessary.
If you are referring to the reference policy, I don't see why a KL divergence term with a reference policy cannot be included into the SAC loss function.
Mode collapse and loss of variety is a common problem for aligned models, so if SAC makes a different tradeoff, encouraging exploration, then that could be useful. | 2,465 |
qgallouedec | 2024-12-13T22:13:21 | > lacks PPO's clipping mechanism that constrains the policy from deviating too far from the previous policy
There is a KL term though
> I don't see why a KL divergence term with a reference policy cannot be included into the SAC loss function.
I guess you can, it's just that in its classic formulation, the SAC objective doesn't contain such a term.
> encouraging exploration, then that could be useful
I'm not so sure about that. But if you manage to produce or find results that help to see more clearly on this matter, please share them.
| 2,465 |
HuggingFaceDocBuilderDev | 2024-12-11T20:08:30 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2463). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,463 |
qgallouedec | 2024-12-12T11:25:31 | Thanks @kashif! | 2,463 |
qgallouedec | 2024-12-11T15:50:18 | Looks good, feel free to mark it ready for review when it's ready :) | 2,462 |
yaricom | 2024-12-12T17:24:11 | @qgallouedec Hi, Quentin! I can see that there are some trainer implementations that do logging of tabular data as `wandb.Table` using `Trainer.log()` method rather than using corresponding method of WandB API.
For example:
```Python
class DPOTrainer(Trainer):
......
def evaluation_loop(...)
.....
self.log(
{
"game_log": wandb.Table(
columns=["Prompt", "Policy", "Ref Model"],
rows=[
[prompt, pol[len(prompt) :], ref[len(prompt) :]]
for prompt, pol, ref in zip(
random_batch["prompt"], policy_output_decoded, ref_output_decoded
)
],
)
}
)
```
We are not sure on the best way to update this part of code in order to support other integrations like Comet, for example.
What do you think if I change the mentioned code block to log table using WandB API instead of `Trainer.log()`?
Something like:
```Python
if "wandb" in self.args.report_to:
import wandb
if wandb.run is not None:
wandb.log({"game_log": wandb.Table(
columns=["Prompt", "Policy", "Ref Model"],
rows=[
[prompt, pol[len(prompt):], ref[len(prompt):]]
for prompt, pol, ref in zip(
random_batch["prompt"], policy_output_decoded, ref_output_decoded
)
],
)}
)
```
This will greatly simplify adding other integrations.
| 2,462 |
qgallouedec | 2024-12-12T20:50:16 | Hey thanks for working on this.
Actually, we need to remove all these logging part in favor of [LogCompletionsCallback](https://huggingface.co/docs/trl/callbacks#trl.LogCompletionsCallback)
The best way is probably to make this callback compatible with comet | 2,462 |
yaricom | 2024-12-13T14:37:42 | @qgallouedec Thank you for quick response. I noticed that `LogCompletionsCallback` is a subclass of `WandbCallback`, which requires the `wandb` module to be present; otherwise, an exception is raised.
It seems a bit out of place to leave this inheritance unchanged and simply add Comet integration to this callback. There could be situations where Comet is installed, but WandB is either not installed or not initialized (e.g., missing API key).
It is possible to change `LogCompletionsCallback` inheritance to use `TrainerCallback` as superclass and then implement both integrations: wandb and Comet.
What do you think? | 2,462 |
qgallouedec | 2024-12-13T17:38:35 | > It is possible to change `LogCompletionsCallback` inheritance to use `TrainerCallback` as superclass and then implement both integrations: wandb and Comet.
>
> What do you think?
Yes, I think your suggestion makes sense. Would you like to make it as part of this PR? | 2,462 |
yaricom | 2024-12-13T17:46:47 | I think it would be better to have another PR for `LogCompletionsCallback` changes to keep things more granular. | 2,462 |
qgallouedec | 2024-12-13T18:10:36 | LGTM, waiting https://github.com/huggingface/trl/pull/2462#discussion_r1884306215 to be addressed then I'll approve & merge. Thanks! | 2,462 |
HuggingFaceDocBuilderDev | 2024-12-13T18:14:15 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2462). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,462 |
qgallouedec | 2024-12-11T15:04:05 | ☄️ | 2,461 |
qgallouedec | 2024-12-11T14:22:57 | The dataset is not loaded in RAM (only the current batch).
The training should be rather light in terms of RAM, as the weights and gradient are on the GPU. You'll still need enough RAM to load the model though.
When I run the experiment, I see that it requires less than 2GB of RAM.
| 2,460 |
Kallinteris-Andreas | 2024-12-11T14:30:09 | What CPU do you have (exact model)
and does DRAM usage explode when you run `CUDA_VISIBLE_DEVICES="" python test.py`
by current best guess is that it is a BF16 related issue (as my r5 4600h does not natively support it, probably off though) | 2,460 |
qgallouedec | 2024-12-11T14:45:57 | ```
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 79
model name : Intel(R) Xeon(R) CPU @ 2.20GHz
stepping : 0
microcode : 0xffffffff
cpu MHz : 2199.998
cache size : 56320 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa mmio_stale_data retbleed bhi
bogomips : 4399.99
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 79
model name : Intel(R) Xeon(R) CPU @ 2.20GHz
stepping : 0
microcode : 0xffffffff
cpu MHz : 2199.998
cache size : 56320 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 1
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa mmio_stale_data retbleed bhi
bogomips : 4399.99
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
```
> and does DRAM usage explode when you run CUDA_VISIBLE_DEVICES="" python test.py
you mean, trying to train on CPU?
btw you may want to set `max_seq_length` in in the `SFTConfig` to limit the GPU memory usage.
> by current best guess is that it is a BF16 related issue (as my r5 4600h does not natively support it, probably off though)
BF16 if off by default yes | 2,460 |
Kallinteris-Andreas | 2024-12-11T14:47:30 | GPU does not work for me (works for my other RL projects)
```sh
$ PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True py test.py
[2024-12-11 16:34:26,123] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
No ROCm runtime is found, using ROCM_HOME='/opt/rocm'
0%| | 0/3 [00:00<?, ?it/s]Traceback (most recent call last):
File "/home/master-andreas/job/trl-test/test.py", line 16, in <module>
trainer.train()
File "/home/master-andreas/job/trl-test/transformers-kalli/src/transformers/trainer.py", line 2164, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/transformers-kalli/src/transformers/trainer.py", line 2522, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/transformers-kalli/src/transformers/trainer.py", line 3667, in training_step
loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/transformers-kalli/src/transformers/trainer.py", line 3721, in compute_loss
outputs = model(**inputs)
^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/transformers-kalli/src/transformers/models/qwen2/modeling_qwen2.py", line 1140, in forward
outputs = self.model(
^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/transformers-kalli/src/transformers/models/qwen2/modeling_qwen2.py", line 870, in forward
layer_outputs = decoder_layer(
^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/transformers-kalli/src/transformers/models/qwen2/modeling_qwen2.py", line 613, in forward
hidden_states = self.mlp(hidden_states)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/transformers-kalli/src/transformers/models/qwen2/modeling_qwen2.py", line 223, in forward
return self.down_proj(self.act_fn(self.gate_proj(hidden_state)) * self.up_proj(hidden_state))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/activation.py", line 432, in forward
return F.silu(input, inplace=self.inplace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/functional.py", line 2380, in silu
return torch._C._nn.silu(input)
^^^^^^^^^^^^^^^^^^^^^^^^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 38.00 MiB. GPU 0 has a total capacity of 3.63 GiB of which 11.31 MiB is free. Including non-PyTorch memory, this process has 3.55 GiB memory in use. Of the allocated memory 3.47 GiB is allocated by PyTorch, and 8.16 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
0%| | 0/3 [00:00<?, ?it/s]
``` | 2,460 |
qgallouedec | 2024-12-11T14:59:11 | WDYM it doesn't work? It seems to work from the traceback.
I can see that your device only have 3.63 GiB, which is not enough to run the example. With `max_seq_length=128` you'll need around 12GB
![W B Chart 11_12_2024, 15_58_14](https://github.com/user-attachments/assets/63349049-b7f3-4c09-9229-b1c3f1914c90) | 2,460 |
Kallinteris-Andreas | 2024-12-11T15:27:51 | Here is the dram usage per value of `max_seq_lenght`
max_seq_length -> max RAM usage observed (rounded up)
4 -> 10GB DRAM
32 -> 9GB DRAM
128 -> 11GB DRAM
512 -> 18GB DRAM
1024 (default) -> 32GB+ DRAM
using `max_seq_length=128` seems to require 28 hours on my CPU which is an improvement from not running at all.
I am not sure what `max_seq_length` actually does, I am assuming it limits the context length used during fine-tuning the docstring mentions something about `ConstantLengthDataset` but I have not found it what it is.
| 2,460 |
qgallouedec | 2024-12-11T15:37:44 | > I am assuming it limits the context length used during fine-tuning
Yes, that's what it does.
> mentions something about ConstantLengthDataset but I have not found it what it is.
This is a special dataset setting where all data have the same length. Not relevant for this issue though | 2,460 |
Kallinteris-Andreas | 2024-12-12T01:29:36 | How much time does it take to run this simple example on your hardware?
```py
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset
dataset = load_dataset("trl-lib/Capybara", split="train")
training_args = SFTConfig(output_dir="Qwen/Qwen2.5-0.5B-SFT", max_seq_length=128)
trainer = SFTTrainer(
args=training_args,
model="Qwen/Qwen2.5-0.5B",
train_dataset=dataset,
)
trainer.train()
``` | 2,460 |
Kallinteris-Andreas | 2024-12-16T10:23:40 | Closing, as it appears to be the natural requirement of SFT | 2,460 |
qgallouedec | 2024-12-13T22:16:30 | Thanks for this suggestion.
Can you quantify the speedup?
Any idea how to properly set the gradient checkpointing configurations?
Can we reproduce the speedup with a very simple code example?
| 2,459 |
qgallouedec | 2024-12-11T17:10:04 | Hey, thanks for contributing!
Is it really a loss type? It seems to me that it can be combined with any loss type, no?
What about having a new arg in `DPOConfig`? maybe `length_normalize`?
Also, I'd add a test for this | 2,458 |
HuggingFaceDocBuilderDev | 2024-12-10T16:14:50 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2457). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,457 |
qgallouedec | 2024-12-10T17:54:12 | This is an interesting finding! ~I suspect it's related to https://github.com/huggingface/trl/issues/2175~. I'm investigating. | 2,456 |
qgallouedec | 2024-12-10T18:54:04 | The issue arises from how the accelerator is configured in [`create_accelerator_and_postprocess`](https://github.com/huggingface/transformers/blob/91b8ab18b778ae9e2f8191866e018cd1dc7097be/src/transformers/trainer.py#L4990).
To set the number of gradient accumulation steps, users can either:
1. Specify `num_steps` in `AcceleratorConfig`, or
2. Use `TrainingArguments.gradient_accumulation_steps` when initializing the `transformers.Trainer`.
However, in both cases, the gradient norm (`grad_norm`) is computed using the accelerator [here](https://github.com/huggingface/transformers/blame/91b8ab18b778ae9e2f8191866e018cd1dc7097be/src/transformers/trainer.py#L2557). When using `TrainingArguments.gradient_accumulation_steps` to define the accumulation steps, the accelerator does not account for the specified value when calculating the gradient norm.
Adding a `gradient_accumulation_steps` argument to the `Accelerator` initialization [here](https://github.com/huggingface/transformers/blame/91b8ab18b778ae9e2f8191866e018cd1dc7097be/src/transformers/trainer.py#L5043) resolves the issue (as shown in the curves below). However, I'm pretty sure it's not what we want to do.
```diff
- self.accelerator = Accelerator(**args)
+ self.accelerator = Accelerator(**args, gradient_accumulation_steps=self.args.gradient_accumulation_steps)
```
@muellerzr, could you review this and share your thoughts?
--
`--gradient_accumulation_steps 8 --per_device_train_batch_size 4`
![Screenshot 2024-12-10 at 19 50 52](https://github.com/user-attachments/assets/9f22e505-6394-4561-9f09-1f7d2df196ed)
`--gradient_accumulation_steps 32 --per_device_train_batch_size 1`
![Screenshot 2024-12-10 at 19 51 11](https://github.com/user-attachments/assets/198999ea-892d-4797-be36-6f200e01f18c)
Before the fix : red/pink ; after the fix blues
![Screenshot 2024-12-10 at 20 01 10](https://github.com/user-attachments/assets/957677a7-bb81-4d5d-8947-7ab0daa1e6e1)
| 2,456 |
muellerzr | 2024-12-11T02:52:48 | Correct, that's not what we want to do because with the fix to how we calculate the number of items in the batch, the losses will not align and things will be off, so we *don't* divide the loss by accumulation steps if we know that value. I'd need to play with this a bit as I'm not 100% sure if we can just modify the grads for clipping without modifying the overall loss we just calculated :thinking: | 2,456 |
AIR-hl | 2024-12-11T03:10:34 | > The issue arises from how the accelerator is configured in [`create_accelerator_and_postprocess`](https://github.com/huggingface/transformers/blob/91b8ab18b778ae9e2f8191866e018cd1dc7097be/src/transformers/trainer.py#L4990).
@qgallouedec I have a new question that if the problem arises from [create_accelerator_and_postprocess](https://github.com/huggingface/transformers/blob/91b8ab18b778ae9e2f8191866e018cd1dc7097be/src/transformers/trainer.py#L4990) in `transformers.Trainer`, why `trl.SFTTrainer`'s behavior is normal, but `trl.DPOTrainer` isnt, they both inherit from `transformers.Trainer`
sft, `batch_size=4`, `accumulation=8`
![7cf799b818cdced95fc4632de02a8fba](https://github.com/user-attachments/assets/35e77e32-544a-4e25-90d9-a3b2ba2b8525)
sft, `batch_size=2`, `accumulation=16`
![1eba3468eab71db9185de3a1ab0120b9](https://github.com/user-attachments/assets/2eadda34-61e4-4cf4-ba63-153d23d7bcd1)
sft, `batch_size=1`, `accumulation=32`
![c6e2266b5eb3ff8736fe652a85124a41](https://github.com/user-attachments/assets/02a34f43-7b98-4ac1-b2c3-c33cf6cb66a0)
| 2,456 |
qgallouedec | 2024-12-11T10:21:00 | > @qgallouedec I have a new question that if the problem arises from [create_accelerator_and_postprocess](https://github.com/huggingface/transformers/blob/91b8ab18b778ae9e2f8191866e018cd1dc7097be/src/transformers/trainer.py#L4990) in `transformers.Trainer`, why `trl.SFTTrainer`'s behavior is normal, but `trl.DPOTrainer` isnt, they both inherit from `transformers.Trainer`
I can't explain it right now. Any idea?
| 2,456 |
End of preview. Expand
in Dataset Viewer.
Stars
import requests
from datetime import datetime
from datasets import Dataset
import pyarrow as pa
import os
def get_stargazers(owner, repo, token):
# Initialize the count and the page number
page = 1
stargazers = []
while True:
# Construct the URL for the stargazers with pagination
stargazers_url = f"https://api.github.com/repos/{owner}/{repo}/stargazers?page={page}&per_page=100"
# Send the request to GitHub API with appropriate headers
headers = {"Accept": "application/vnd.github.v3.star+json", "Authorization": "token " + token}
response = requests.get(stargazers_url, headers=headers)
if response.status_code != 200:
raise Exception(f"Failed to fetch stargazers with status code {response.status_code}: {response.text}")
stargazers_page = response.json()
if not stargazers_page: # Exit the loop if there are no more stargazers to process
break
stargazers.extend(stargazers_page)
page += 1 # Move to the next page
return stargazers
token = os.environ.get("GITHUB_PAT")
stargazers = get_stargazers("huggingface", "trl", token)
stargazers = {key: [stargazer[key] for stargazer in stargazers] for key in stargazers[0].keys()}
dataset = Dataset.from_dict(stargazers)
def clean(example):
starred_at = datetime.strptime(example["starred_at"], "%Y-%m-%dT%H:%M:%SZ")
starred_at = pa.scalar(starred_at, type=pa.timestamp("s", tz="UTC"))
return {"starred_at": starred_at, "user": example["user"]["login"]}
dataset = dataset.map(clean, remove_columns=dataset.column_names)
dataset.push_to_hub("qgallouedec/trl-metrics", config_name="stargazers")
Pypi downloads
from datasets import Dataset
from google.cloud import bigquery
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "propane-tree-432413-4c3e2b5e6b3c.json"
# Initialize a BigQuery client
client = bigquery.Client()
# Define your query
query = """
#standardSQL
WITH daily_downloads AS (
SELECT
DATE(timestamp) AS day,
COUNT(*) AS num_downloads
FROM
`bigquery-public-data.pypi.file_downloads`
WHERE
file.project = 'trl'
-- Filter for the last 12 months
AND DATE(timestamp) BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 54 MONTH) AND CURRENT_DATE()
GROUP BY
day
)
SELECT
day,
num_downloads
FROM
daily_downloads
ORDER BY
day DESC
"""
# Execute the query
query_job = client.query(query)
# Fetch the results
results = query_job.result()
# Convert the results to a pandas DataFrame and then to a Dataset
df = results.to_dataframe()
dataset = Dataset.from_pandas(df)
dataset.push_to_hub("qgallouedec/trl-metrics", config_name="pypi_downloads")
Models tagged
from huggingface_hub import HfApi
from datasets import Dataset
api = HfApi()
models = api.list_models(tags="trl")
dataset_list = [{"id": model.id, "created_at": model.created_at, "likes": model.likes, "downloads": model.downloads, "tags": model.tags} for model in models]
dataset_dict = {key: [d[key] for d in dataset_list] for key in dataset_list[0].keys()}
dataset = Dataset.from_dict(dataset_dict)
dataset.push_to_hub("qgallouedec/trl-metrics", config_name="models")
Issues and comments
import requests
from datetime import datetime
import os
from datasets import Dataset
from tqdm import tqdm
token = os.environ.get("GITHUB_PAT")
def get_full_response(url, headers, params=None):
page = 1
output = []
params = params or {}
while True:
params = {**params, "page": page, "per_page": 100}
response = requests.get(url, headers=headers, params=params)
if response.status_code != 200:
raise Exception(f"Failed to fetch issues: {response.text}")
batch = response.json()
if len(batch) == 0:
break
output.extend(batch)
page += 1
return output
# GitHub API URL for issues (closed and open)
issues_url = f"https://api.github.com/repos/huggingface/trl/issues"
# Set up headers for authentication
headers = {"Authorization": f"token {token}", "Accept": "application/vnd.github.v3+json"}
# Make the request
issues = get_full_response(issues_url, headers, params={"state": "all"})
issues_dataset_dict = {
"number": [],
"title": [],
"user": [],
"state": [],
"created_at": [],
"closed_at": [],
"comments_count": [],
}
comments_dataset_dict = {
"user": [],
"created_at": [],
"body": [],
"issue_number": [],
}
for issue in tqdm(issues):
# Extract relevant information
issue_number = issue["number"]
title = issue["title"]
created_at = datetime.strptime(issue["created_at"], "%Y-%m-%dT%H:%M:%SZ")
comments_count = issue["comments"]
comments_url = issue["comments_url"]
comments = get_full_response(comments_url, headers=headers)
for comment in comments:
comments_dataset_dict["user"].append(comment["user"]["login"])
comments_dataset_dict["created_at"].append(datetime.strptime(comment["created_at"], "%Y-%m-%dT%H:%M:%SZ"))
comments_dataset_dict["body"].append(comment["body"])
comments_dataset_dict["issue_number"].append(issue_number)
issues_dataset_dict["number"].append(issue_number)
issues_dataset_dict["title"].append(title)
issues_dataset_dict["user"].append(issue["user"]["login"])
issues_dataset_dict["state"].append(issue["state"])
issues_dataset_dict["created_at"].append(datetime.strptime(issue["created_at"], "%Y-%m-%dT%H:%M:%SZ"))
issues_dataset_dict["closed_at"].append(datetime.strptime(issue["closed_at"], "%Y-%m-%dT%H:%M:%SZ") if issue["closed_at"] else None)
issues_dataset_dict["comments_count"].append(comments_count)
issues_dataset = Dataset.from_dict(issues_dataset_dict)
comments_dataset = Dataset.from_dict(comments_dataset_dict)
issues_dataset.push_to_hub("qgallouedec/trl-metrics", config_name="issues")
comments_dataset.push_to_hub("qgallouedec/trl-metrics", config_name="issue_comments")
- Downloads last month
- 334