# chapter-llama / requirements.txt
transformers>=4.45.2
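# Speech transcription: ctranslate2 is the inference backend used by faster-whisper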
ctranslate2>=4.4.0
faster-whisper>=1.0.3
ffmpeg-python==0.2.0
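# Gradio demo UI; yt-dlp for downloading videos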
gradio==5.9.1
yt-dlp
json5
torch
torchaudio
more_itertools
zhconv
sentencepiece
pyannote.audio
torchmetrics
lightning
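# Hydra configuration management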
hydra-core==1.3.2
hydra-colorlog==1.2.0
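# Llama fine-tuning utilities (llama-cookbook, formerly llama-recipes)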
llama-cookbook
wandb
rich
git+https://github.com/lucas-ventura/lutils.git
git+https://github.com/idriscnrs/idr_torch.git
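# Captioning evaluation metrics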
pycocoevalcap
prettytable
hf_transfer
# Needed by ALMA-GPTQ
accelerate
auto-gptq
optimum
# Needed by ALMA-GGUF
ctransformers[cuda]
# Needed by the load_in_4bit parameter in transformers
bitsandbytes