MachineLearningLM nielsr HF Staff committed on
Commit aed8dc7 · verified · 1 Parent(s): 68c6dca

Improve model card: Add pipeline tag, library name, and expand usage details (#2)


- Improve model card: Add pipeline tag, library name, and expand usage details (445adf8dbbc1e88af30e16619896db16f93c7cf7)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +124 -6
README.md CHANGED
@@ -1,13 +1,16 @@
  ---
- MachineLearningML: Continued Pretraining Language Models on Millions of Synthetic Tabular Prediction Tasks Scales In-Context ML
- license: apache-2.0
  base_model:
  - Qwen/Qwen2.5-7B-Instruct
+ license: apache-2.0
+ pipeline_tag: text-generation
+ library_name: transformers
  ---

  # MachineLearningLM

- ## model summary
+ This repository contains the model presented in the paper [MachineLearningLM: Scaling Many-shot In-context Learning via Continued Pretraining](https://huggingface.co/papers/2509.06806).
+
+ ## Model Summary

  Can LLMs learn from 1,000 in-context examples?

@@ -25,10 +28,10 @@ Introducing **MachineLearningLM** 🧪📊 — a model continuously pretrained o

  GitHub: https://github.com/HaoAreYuDong/MachineLearningLM

- ## evaluation and validation
+ ## Evaluation and Validation

  We have developed an automated evaluation framework — simply configure the parameters to easily perform validation and evaluation.
- **The code is now open-sourced at our GitHub.**
+ **The code is now open-sourced at our [GitHub repository](https://github.com/HaoAreYuDong/MachineLearningLM).**

  **Quick Start**

@@ -39,7 +42,7 @@ python ./src/evaluation/model_pred/dl_model_pred.py \
    --output_dir ./demo_output.jsonl \
    --model_name MachineLearningLM/MachineLearningLM-7B-v1
  ```
- **pipeline**
+ **Pipeline**
  ```bash
  # modify the evaluate_parameters.sh file
  source evaluate_parameters.sh
@@ -68,3 +71,118 @@ https://huggingface.co/mradermacher/MachineLearningLM-7B-v1-GGUF

  For more usage details, please visit our GitHub.

+ ## TabICL Evaluation
+
+ **This part of the code must run in an environment with the `tabicl` and `openpyxl` libraries installed.**
+
+ The TabICL evaluation code lives separately in `./src/evaluation/tabicl_evaluate.py`; use `./scripts/tabicl_evaluate.sh` to obtain the TabICL evaluation results.
+
+ Use `--datasets` to specify the datasets to evaluate and `--sample_sizes` to set the number of shots.
+
+ To evaluate multiple datasets, separate their names with spaces; to evaluate every CSV file in the input folder, pass `all`. An example invocation is sketched below.
+
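+ The following is a minimal, illustrative sketch rather than a command taken from the repository: the dataset names are placeholders, the shot counts are arbitrary, and whether these flags are consumed by the shell wrapper or by `tabicl_evaluate.py` directly should be checked against the scripts themselves.
+
+ ```bash
+ # Assumed setup: tabicl and openpyxl are required; a bundled copy of tabicl lives in ./third_party/tabicl.
+ pip install openpyxl
+ pip install -e ./third_party/tabicl   # or a PyPI install of tabicl, if that suits your environment
+
+ # Placeholder dataset names; pass "all" to evaluate every CSV file in the input folder.
+ bash ./scripts/tabicl_evaluate.sh \
+   --datasets dataset_a dataset_b \
+   --sample_sizes 8 64 512
+ ```
+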
+ ## Prior Data
+
+ MachineLearningLM uses the code from TabICL to generate prior data.
+
+ Use `./scripts/generate_data.sh` to generate the prior data. It produces the corresponding .pt and .csv files and normalizes the feature values in the CSV files to the range 0–999, as we did in the paper. A usage sketch follows below.
+
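+ A minimal sketch of the generation step (illustrative only; the generation parameters are not passed on this command line but are documented in the tables below and configured where the script sets up `dataset.py`):
+
+ ```bash
+ # Generate prior datasets (.pt) together with their 0-999-normalized .csv copies.
+ bash ./scripts/generate_data.sh
+
+ # The repository also ships a standalone converter for existing .pt files (arguments omitted here):
+ # python ./src/prior_data/pt_to_csv.py ...
+ ```
+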
+ ### Parameter Introduction (refer to the comments in `tabicl/src/tabicl/prior/dataset.py`)
+
+ **Data Scale & Structure**
+
+ | Parameter | Type | Description |
+ | :------------- | :--- | :------------------------------------------------------ |
+ | `min_features` | int | Minimum number of features per dataset |
+ | `max_features` | int | Maximum number of features per dataset |
+ | `max_classes` | int | Maximum number of target classes |
+ | `min_seq_len` | int | Minimum samples per dataset. Uses `max_seq_len` if None |
+ | `max_seq_len` | int | Maximum samples per dataset (exclusive) |
+
+ **Batch Configuration**
+
+ | Parameter | Type | Description |
+ | :--------------------- | :--- | :----------------------------------------------------------- |
+ | `batch_size` | int | Total number of datasets to generate per batch |
+ | `batch_size_per_gp` | int | Number of datasets per group (shared characteristics) |
+ | `batch_size_per_subgp` | int | Number of datasets per subgroup (similar causal structures). Defaults to `batch_size_per_gp` if None |
+
+ **Sequence Length Control**
+
+ | Parameter | Type | Description |
+ | :--------------- | :--- | :----------------------------------------------------------- |
+ | `log_seq_len` | bool | Sample sequence length from a log-uniform distribution if True |
+ | `seq_len_per_gp` | bool | Sample sequence length per group (enables variable-sized datasets) |
+ | `replay_small` | bool | Occasionally sample smaller sequences for model robustness |
+
+ **Train-Test Split**
+
+ | Parameter | Type | Description |
+ | :--------------- | :-------- | :----------------------------------------------------------- |
+ | `min_train_size` | int/float | Start position/ratio of the train split (int: absolute, float: fractional) |
+ | `max_train_size` | int/float | End position/ratio of the train split (int: absolute, float: fractional) |
+
+ **Generation Method**
+
+ | Parameter | Type | Description |
+ | :----------- | :--- | :----------------------------------------------------------- |
+ | `prior_type` | str | Prior type: 'mlp_scm', 'tree_scm', or 'mix_scm' (random selection) |
+ | `fixed_hp` | dict | Fixed structural configuration parameters |
+ | `sampled_hp` | dict | Parameters sampled during generation |
+
+ **Computation Settings**
+
+ | Parameter | Type | Description |
+ | :------------------------- | :--- | :------------------------------------------------ |
+ | `n_jobs` | int | Number of parallel jobs (-1 = use all processors) |
+ | `num_threads_per_generate` | int | Number of threads per generation job |
+ | `device` | str | Computation device ('cpu' or 'cuda') |
+
+ ## Train
+
+ MachineLearningLM uses the LLaMA-Factory framework for training.
+
+ ### Training Environment Configuration
+
+ ```bash
+ cd ./third_party/LLaMA-Factory
+ pip install -e ".[torch,metrics]" --no-build-isolation
+ pip install wandb
+ ```
+
+ Use `./scripts/train.sh` for training; a launch sketch follows below.
+
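+ A minimal launch sketch (illustrative; `wandb login` is only needed if you want Weights & Biases logging, and the actual training hyperparameters live inside the script):
+
+ ```bash
+ # Run from the repository root after the environment setup above.
+ wandb login   # optional
+ bash ./scripts/train.sh
+ ```
+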
+ ## Project Structure
+
+ ```
+ MachineLearningLM/
+ ├── src/
+ │   ├── evaluation/
+ │   │   ├── data_prep/             # Data preprocessing and chunking utilities
+ │   │   ├── prompt_gen/            # Prompt generation for deep learning models
+ │   │   ├── model_pred/            # Model inference (ML and DL prediction engines)
+ │   │   ├── result_proc/           # 5-layer evaluation architecture and metrics processing
+ │   │   ├── zero_summary/          # Result summarization and report generation
+ │   │   └── tabicl_evaluate.py
+ │   └── prior_data/
+ │       └── pt_to_csv.py
+ ├── scripts/
+ │   ├── single_process/            # Sequential execution shell scripts
+ │   ├── multi_process/             # Parallel execution shell scripts (with _mp suffix)
+ │   ├── evaluate_parameters.sh     # Global parameter configuration
+ │   ├── evaluate_pipeline.sh       # Automated pipeline
+ │   ├── generate_data.sh
+ │   ├── tabicl_evaluate.sh
+ │   └── train.sh
+ ├── datahub_inputs/
+ │   ├── data_demo/                 # Demo datasets for testing
+ │   └── data_raw/                  # Raw input datasets
+ ├── third_party/
+ │   ├── tabicl/
+ │   └── LLaMA-Factory/
+ ├── requirements.txt               # Python dependencies for the evaluation framework
+ ├── README.md
+ ├── README_zh.md
+ ├── THIRD_PARTY_NOTICES.md
+ └── LICENSE
+ ```