---
license: apache-2.0
base_model: Jacaranda-Health/ASR-STT
tags:
- speech-to-text
- automatic-speech-recognition
- quantized
- 4bit
language:
- en
- sw
pipeline_tag: automatic-speech-recognition
---

# ASR-STT 4-bit Quantized

This is a 4-bit quantized version of [Jacaranda-Health/ASR-STT](https://huggingface.co/Jacaranda-Health/ASR-STT).

## Model Details
- **Base Model**: Jacaranda-Health/ASR-STT
- **Quantization**: 4-bit (NF4 via bitsandbytes)
- **Size Reduction**: 84.6% smaller than the original
- **Original Size**: 2913.89 MB
- **Quantized Size**: 448.94 MB
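
The reported reduction follows directly from the two sizes above; a quick sanity check:

```python
# Size reduction implied by the original and quantized sizes listed above
original_mb, quantized_mb = 2913.89, 448.94
print(f"{100 * (1 - quantized_mb / original_mb):.1f}% smaller")  # -> 84.6% smaller
```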

## Usage

```python
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, BitsAndBytesConfig
import torch
import librosa

# Load processor
processor = AutoProcessor.from_pretrained("eolang/ASR-STT-4bit")  # same repo as the quantized weights

# Configure 4-bit quantization (NF4 with double quantization, fp16 compute)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # matmuls run in fp16
    bnb_4bit_quant_type="nf4",             # NormalFloat4 data type
    bnb_4bit_use_double_quant=True         # also quantize the quantization constants
)

# Load quantized model
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    "eolang/ASR-STT-4bit",
    quantization_config=quantization_config,
    device_map="auto"
)

# Transcription function
def transcribe(filepath):
    # Load audio and resample to the 16 kHz rate the model expects
    audio, sr = librosa.load(filepath, sr=16000)
    inputs = processor(audio, sampling_rate=sr, return_tensors="pt")

    # Move features to the model's device and cast to the fp16 compute dtype
    input_features = inputs["input_features"].to(model.device, dtype=torch.float16)

    with torch.no_grad():
        generated_ids = model.generate(input_features)

    return processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

# Example usage
transcription = transcribe("path/to/audio.wav")
print(transcription)
```
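
The helper can be reused for several recordings; the file names below are placeholders:

```python
# Transcribe a batch of (hypothetical) recordings one at a time
audio_files = ["clip_001.wav", "clip_002.wav"]
for path in audio_files:
    print(f"{path}: {transcribe(path)}")
```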

## Performance
- Lower memory usage: the 4-bit weights are roughly 85% smaller than the original (see the snippet below)
- Faster inference on memory-bound hardware thanks to the reduced precision
- Transcription quality is largely preserved, though 4-bit quantization may introduce minor degradation on some inputs
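
To verify the saving on your own setup, `transformers` exposes a per-model memory footprint; this is a minimal sketch and the exact figure will vary with library version and hardware:

```python
# Approximate in-memory size of the loaded 4-bit weights
footprint_mb = model.get_memory_footprint() / 1024**2
print(f"Model footprint: {footprint_mb:.0f} MB")
```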

## Requirements
- transformers
- torch
- bitsandbytes
- librosa
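
The dependencies can be installed with `pip install transformers torch bitsandbytes librosa`. Note that bitsandbytes 4-bit inference generally requires a CUDA-capable GPU.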