How to run inference

#1
by Navanit-AI - opened

Hi @SaraAlthubaiti,

How should I run inference with the model?
As per your model card, this is the way to run inference:

https://huggingface.co/SaraAlthubaiti/TinyOctopus#inference

from inference import transcribe

audio_path = "path/to/audio.wav"  # Replace with your actual audio file
output = transcribe(audio_path, task="asr")  # Options: "dialect", "asr", "translation"

print("Generated Text:", output)

But where is the inference module coming from? Should I download the repo locally and then run it? How do I do so?

Yes, you should download the repo locally and run it.
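For example, here is a minimal sketch of fetching the repo locally with huggingface_hub's snapshot_download (the "TinyOctopus" folder name is just an example), after which you can run the snippet above from inside that folder:

from huggingface_hub import snapshot_download

# Download the full TinyOctopus model repo (code + weights) to a local folder.
local_path = snapshot_download(repo_id="SaraAlthubaiti/TinyOctopus", local_dir="TinyOctopus")
print("Repo downloaded to:", local_path)

# Then run the inference snippet from inside that folder, e.g.:
#   cd TinyOctopus
#   python -c "from inference import transcribe; print(transcribe('path/to/audio.wav', task='asr'))"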

I tried that and got an error due to the line below in models/__init__.py:

from .salmonn import SALMONN

@SaraAlthubaiti Can you please look into this error?

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[2], line 3
      1 import torch
      2 from transformers import WhisperFeatureExtractor
----> 3 from models.tinyoctopus import TINYOCTOPUS
      4 from utils import prepare_one_sample

File ~/SaraAlthubaiti/TinyOctopus/models/__init__.py:15
      1 # Copyright (2024) Tsinghua University, Bytedance Ltd. and/or its affiliates
      2 #
      3 # Licensed under the Apache License, Version 2.0 (the "License");
   (...)     12 # See the License for the specific language governing permissions and
     13 # limitations under the License.
---> 15 from .salmonn import SALMONN
     17 def load_model(config):
     18     return SALMONN.from_config(config)

ModuleNotFoundError: No module named 'models.salmonn'

Same error when doing inference:

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[3], line 1
----> 1 from inference import transcribe
      3 audio_path = "examples/4970-29093-0016.wav"  # Replace with your actual audio file
      4 output = transcribe(audio_path, task="asr")  # Options: "dialect", "asr", "translation"

File ~/SaraAlthubaiti/TinyOctopus/inference.py:3
      1 import torch
      2 from transformers import WhisperFeatureExtractor
----> 3 from models.tinyoctopus import TINYOCTOPUS
      4 from utils import prepare_one_sample
      6 # Load model

File ~/SaraAlthubaiti/TinyOctopus/models/__init__.py:15
      1 # Copyright (2024) Tsinghua University, Bytedance Ltd. and/or its affiliates
      2 #
      3 # Licensed under the Apache License, Version 2.0 (the "License");
   (...)     12 # See the License for the specific language governing permissions and
     13 # limitations under the License.
---> 15 from .salmonn import SALMONN
     17 def load_model(config):
     18     return SALMONN.from_config(config)

ModuleNotFoundError: No module named 'models.salmonn'
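
One possible workaround while this is looked into, a sketch only: the tracebacks suggest inference.py only needs TINYOCTOPUS from models/tinyoctopus.py, so if nothing in your run path actually calls load_model with a SALMONN config, you could edit models/__init__.py so it no longer imports the missing salmonn module.

# models/__init__.py -- sketch of a workaround, not the repo's official fix
# The shipped file imports SALMONN from .salmonn, but no salmonn.py is present,
# which raises the ModuleNotFoundError above. If only TINYOCTOPUS is needed,
# importing it directly sidesteps the missing module.
from .tinyoctopus import TINYOCTOPUS  # assumes models/tinyoctopus.py defines TINYOCTOPUS

def load_model(config):
    # The original returned SALMONN.from_config(config); this assumes
    # TINYOCTOPUS exposes a compatible from_config classmethod.
    return TINYOCTOPUS.from_config(config)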
