Yingxu He committed on
Commit edf339b · verified · 1 Parent(s): efca8a5

Update README.md

Files changed (1): README.md (+4 -3)
README.md CHANGED

```diff
@@ -54,7 +54,7 @@ Here we provide a code snippet illustrating the process of loading both the proc
 from datasets import load_dataset
 from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
 
-repo_id = "MERaLiON/AudioLLM"
+repo_id = "MERaLiON/MERaLiON-AudioLLM-Whisper-SEA-LION"
 
 processor = AutoProcessor.from_pretrained(
     repo_id,
@@ -95,7 +95,7 @@ MERaLiON-AudioLLM also supports batch inference.
 from datasets import load_dataset
 from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
 
-repo_id = "MERaLiON/AudioLLM"
+repo_id = "MERaLiON/MERaLiON-AudioLLM-Whisper-SEA-LION"
 
 processor = AutoProcessor.from_pretrained(
     repo_id,
@@ -135,6 +135,7 @@ response = processor.batch_decode(generated_ids, skip_special_tokens=True)
 
 The current MERaLiON-AudioLLM has not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.
 
+This research is supported by the National Research Foundation, Singapore and Infocomm Media Development Authority, Singapore under its National Large Language Models Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore and Infocomm Media Development Authority, Singapore.
 
 ## Technical Specifications
 
@@ -144,7 +145,7 @@ MERaLiON-AudioLLM is trained on a diverse collection of publicly available datas
 
 ### Compute and Infrastructure
 
-MERaLiON-AudioLLM is trained on the **ASPIRE 2A+** Supercomputer Cluster, provided by the **National Supercomputing Centre (NSCC)**. ASPIRE 2A+ cluster provides multiple H100 nodes, with each compute node equipped with 8 Nvidia H100 GPUs, 2 TB of RAM, and 30 TB of locally attached NVMe storage. These nodes are interconnected via a rail-optimised, full fat-tree topology, utilising 400 Gb/s NDR InfiniBand cables. Additionally, the cluster incorporates a 2.5 PB SSD-based Lustre file system, linked to the H100 nodes through high-speed InfiniBand connections.
+MERaLiON-AudioLLM is trained on the **ASPIRE 2A+** Supercomputer Cluster, provided by **National Supercomputing Centre (NSCC)**, Singapore. ASPIRE 2A+ cluster provides multiple H100 nodes, with each compute node equipped with 8 Nvidia H100 GPUs, 2 TB of RAM, and 30 TB of locally attached NVMe storage. These nodes are interconnected via a rail-optimised, full fat-tree topology, utilising 400 Gb/s NDR InfiniBand cables. Additionally, the cluster incorporates a 2.5 PB SSD-based Lustre file system, linked to the H100 nodes through high-speed InfiniBand connections.
 
 With a global batch size of 640, we train the current release of MERaLiON-AudioLLM for around 200k steps, which took 2 days to complete using 16 nodes, 128 H100 GPUs.
```
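As a quick sanity check on the compute figures quoted in the last hunk (global batch size 640, around 200k steps, 16 nodes with 8 H100 GPUs each), the implied per-GPU batch and total samples seen can be worked out. This is a back-of-the-envelope sketch only; the even split of the global batch across GPUs with no gradient accumulation is an assumption for illustration, not a statement about the actual training configuration.

```python
# Figures taken directly from the README diff above.
nodes = 16
gpus_per_node = 8
global_batch_size = 640
steps = 200_000

total_gpus = nodes * gpus_per_node  # 16 x 8 = 128 H100 GPUs, matching the text
# Assumed: the global batch is split evenly with no gradient accumulation.
per_gpu_batch = global_batch_size // total_gpus  # 640 / 128 = 5 samples per GPU per step
total_samples = global_batch_size * steps  # 640 * 200k = 128M samples seen in total

print(total_gpus, per_gpu_batch, total_samples)
```

Under these assumptions each GPU processes 5 samples per step, and the run sees roughly 128 million training samples overall.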