nazdridoy committed
Commit c2e6d7e · verified · 1 Parent(s): bc34cae

refactor(core): rebrand to AI-Inferoxy and update API endpoints


- [api] Modify token provisioning endpoint to `/keys/provision/hf` (hf_token_utils.py:get_proxy_token():49)
- [api] Modify token status report endpoint to `/keys/report/hf` (hf_token_utils.py:report_token_status():156)
- [ui] Update Gradio application block title to "AI-Inferoxy AI Hub" (app.py:create_app():27)
- [ui] Change main header and descriptive text to reference "AI-Inferoxy" (ui_components.py:create_main_header():707,709)
- [docs] Update footer documentation link to "AI‑Inferoxy docs" (ui_components.py:create_footer():728)
- [docs] Replace "HF-Inferoxy" with "AI-Inferoxy" in README.md titles, headings, descriptions, and links (README.md:2,11,13,27,39,56,60,74)
- [refactor] Update docstring, proxy server comment, and error message (chat_handler.py:chat_respond():37,50,130)
- [refactor] Update docstring and connection error messages (hf_token_utils.py:get_proxy_token():19,76,83)
- [refactor] Update docstrings, proxy comments, and error messages (image_handler.py:generate_image():45,57,115, generate_image_to_image():169,181,239)
- [refactor] Update docstring, proxy comment, and error message (tts_handler.py:generate_text_to_speech():39,51,117)
- [refactor] Update docstring, proxy comment, and error message (video_handler.py:generate_video():35,47,100)
- [refactor] Replace "HF-Inferoxy" with "AI-Inferoxy" in comments, docstrings, and error messages (utils.py)

Files changed (9)
  1. README.md +9 -9
  2. app.py +2 -2
  3. chat_handler.py +4 -4
  4. hf_token_utils.py +6 -6
  5. image_handler.py +7 -7
  6. tts_handler.py +4 -4
  7. ui_components.py +4 -4
  8. utils.py +1 -1
  9. video_handler.py +4 -4
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-title: HF-Inferoxy AI Hub
+title: AI-Inferoxy AI Hub
 emoji: 🚀
 colorFrom: purple
 colorTo: blue
@@ -11,9 +11,9 @@ hf_oauth_authorized_org:
 - nazdev
 ---
 
-## 🚀 HF‑Inferoxy AI Hub
+## 🚀 AI‑Inferoxy AI Hub
 
-A focused, multi‑modal AI workspace. Chat, create images, transform images, generate short videos, and synthesize speech — all routed through HF‑Inferoxy for secure, quota‑aware token management and provider failover.
+A focused, multi‑modal AI workspace. Chat, create images, transform images, generate short videos, and synthesize speech — all routed through AI‑Inferoxy for secure, quota‑aware token management and provider failover.
 
 ### Highlights
 - Chat, Image, Image‑to‑Image, Video, and TTS in one app
@@ -23,7 +23,7 @@ A focused, multi‑modal AI workspace. Chat, create images, transform images, ge
 
 ### Quick Start (Hugging Face Space)
 Add Space secrets:
-- `PROXY_URL`: HF‑Inferoxy server URL (e.g., `https://proxy.example.com`)
+- `PROXY_URL`: AI‑Inferoxy server URL (e.g., `https://proxy.example.com`)
 - `PROXY_KEY`: API key for your proxy
 
 Org access control: instead of a custom `ALLOWED_ORGS` secret and runtime checks, configure org restrictions in README metadata using `hf_oauth_authorized_org` per HF Spaces OAuth docs. Example:
@@ -38,7 +38,7 @@ hf_oauth_authorized_org:
 The app reads these at runtime — no extra setup required.
 
 ### How It Works
-1. The app requests a valid token from HF‑Inferoxy for each call.
+1. The app requests a valid token from AI‑Inferoxy for each call.
 2. Requests are sent to the selected provider (or `auto`).
 3. Status is reported back for rotation and telemetry.
 
@@ -54,12 +54,12 @@ The app reads these at runtime — no extra setup required.
 - Provider from dropdown. Default is `auto`.
 
 ### Providers
-Compatible with providers configured in HF‑Inferoxy, including `auto` (default), `hf-inference`, `cerebras`, `cohere`, `groq`, `together`, `fal-ai`, `replicate`, `nebius`, `nscale`, and others.
+Compatible with providers configured in AI‑Inferoxy, including `auto` (default), `hf-inference`, `cerebras`, `cohere`, `groq`, `together`, `fal-ai`, `replicate`, `nebius`, `nscale`, and others.
 
 ### Security
 - HF OAuth validates account; org membership is enforced by Space metadata (`hf_oauth_authorized_org`).
 - Inference uses proxy‑managed tokens. Secrets are Space secrets.
-- RBAC, rotation, and quarantine handled by HF‑Inferoxy.
+- RBAC, rotation, and quarantine handled by AI‑Inferoxy.
 
 ### Troubleshooting
 - 401/403: verify secrets and org access.
@@ -72,9 +72,9 @@ This project is licensed under the GNU Affero General Public License v3.0 (AGPL-
 
 ### Links
 - Live Space: [huggingface.co/spaces/nazdridoy/inferoxy-hub](https://huggingface.co/spaces/nazdridoy/inferoxy-hub)
-- HF‑Inferoxy docs: [nazdridoy.github.io/hf-inferoxy](https://nazdridoy.github.io/hf-inferoxy/)
+- AI‑Inferoxy docs: [ai-inferoxy/huggingface-hub-integration](https://nazdridoy.github.io/ai-inferoxy/)
 - Gradio docs: [gradio.app/docs](https://gradio.app/docs/)
 
-— Built with HF‑Inferoxy for intelligent token management.
+— Built with AI‑Inferoxy for intelligent token management.
 
app.py CHANGED
@@ -1,5 +1,5 @@
 """
-HF-Inferoxy AI Hub - Main application entry point.
+AI-Inferoxy AI Hub - Main application entry point.
 A comprehensive AI platform with chat and image generation capabilities.
 """
 
@@ -24,7 +24,7 @@ def create_app():
     """Create and configure the main Gradio application."""
 
     # Create the main Gradio interface with tabs
-    with gr.Blocks(title="HF-Inferoxy AI Hub", theme=get_gradio_theme()) as demo:
+    with gr.Blocks(title="AI-Inferoxy AI Hub", theme=get_gradio_theme()) as demo:
         # Sidebar with HF OAuth login/logout
         with gr.Sidebar():
             gr.LoginButton()
chat_handler.py CHANGED
@@ -1,5 +1,5 @@
 """
-Chat functionality handler for HF-Inferoxy AI Hub.
+Chat functionality handler for AI-Inferoxy AI Hub.
 Handles chat completion requests with streaming responses.
 """
 
@@ -34,7 +34,7 @@ def chat_respond(
     client_name: str | None = None,
 ):
     """
-    Chat completion function using HF-Inferoxy token management.
+    Chat completion function using AI-Inferoxy token management.
     """
     # Validate proxy API key
     is_valid, error_msg = validate_proxy_key()
@@ -46,7 +46,7 @@ def chat_respond(
 
     token_id = None
     try:
-        # Get token from HF-Inferoxy proxy server with timeout handling
+        # Get token from AI-Inferoxy proxy server with timeout handling
         print(f"🔑 Chat: Requesting token from proxy...")
         token, token_id = get_proxy_token(api_key=proxy_api_key)
         print(f"✅ Chat: Got token: {token_id}")
@@ -128,7 +128,7 @@ def chat_respond(
 
     except ConnectionError as e:
         # Handle proxy connection errors
-        error_msg = f"Cannot connect to HF-Inferoxy server: {str(e)}"
+        error_msg = f"Cannot connect to AI-Inferoxy server: {str(e)}"
         print(f"🔌 Chat connection error: {error_msg}")
         if token_id:
             report_token_status(token_id, "error", error_msg, api_key=proxy_api_key, client_name=client_name)
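
Note: these hunks only touch the token plumbing; the chat call itself sits outside them. As a rough orientation sketch (not the handler's actual body), the surrounding pattern looks roughly like this, assuming the handler wraps `huggingface_hub.InferenceClient` and using the helper signatures from `hf_token_utils.py`:

# Hypothetical sketch of the handler pattern; the InferenceClient call is an
# assumption, only get_proxy_token/report_token_status appear in this diff.
from huggingface_hub import InferenceClient
from hf_token_utils import get_proxy_token, report_token_status

def respond_once(message: str, model: str, provider: str, proxy_api_key: str) -> str:
    token_id = None
    try:
        # Provision a managed token from the AI-Inferoxy proxy
        token, token_id = get_proxy_token(api_key=proxy_api_key)
        client = InferenceClient(provider=provider, api_key=token)
        result = client.chat_completion(
            messages=[{"role": "user", "content": message}],
            model=model,
        )
        report_token_status(token_id, "success", api_key=proxy_api_key)
        return result.choices[0].message.content
    except ConnectionError as e:
        # Mirror the handler's error path: report failures so the proxy can rotate keys
        if token_id:
            report_token_status(token_id, "error", str(e), api_key=proxy_api_key)
        raise
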
hf_token_utils.py CHANGED
@@ -17,7 +17,7 @@ def get_proxy_token(proxy_url: str = None, api_key: str = None) -> Tuple[str, st
     Get a valid token from the proxy server with timeout and retry logic.
 
     Args:
-        proxy_url: URL of the HF-Inferoxy server (optional, will use PROXY_URL env var if not provided)
+        proxy_url: URL of the AI-Inferoxy server (optional, will use PROXY_URL env var if not provided)
         api_key: Your API key for authenticating with the proxy server
 
     Returns:
@@ -46,7 +46,7 @@ def get_proxy_token(proxy_url: str = None, api_key: str = None) -> Tuple[str, st
             print(f"🔄 Token provision attempt {attempt + 1}/{RETRY_ATTEMPTS}")
 
             response = requests.get(
-                f"{proxy_url}/keys/provision",
+                f"{proxy_url}/keys/provision/hf",
                 headers=headers,
                 timeout=REQUEST_TIMEOUT
             )
@@ -73,14 +73,14 @@ def get_proxy_token(proxy_url: str = None, api_key: str = None) -> Tuple[str, st
             print(f"🔌 {error_msg}")
 
             if attempt == RETRY_ATTEMPTS - 1:  # Last attempt
-                raise ConnectionError(f"Cannot connect to HF-Inferoxy at {proxy_url}. Please check if the server is running.")
+                raise ConnectionError(f"Cannot connect to AI-Inferoxy at {proxy_url}. Please check if the server is running.")
 
         except Timeout as e:
             error_msg = f"Request timeout after {REQUEST_TIMEOUT}s: {str(e)}"
             print(f"⏰ {error_msg}")
 
             if attempt == RETRY_ATTEMPTS - 1:  # Last attempt
-                raise TimeoutError(f"Timeout connecting to HF-Inferoxy. Server may be overloaded.")
+                raise TimeoutError(f"Timeout connecting to AI-Inferoxy. Server may be overloaded.")
 
         except RequestException as e:
             error_msg = f"Request error: {str(e)}"
@@ -109,7 +109,7 @@ def report_token_status(
         token_id: ID of the token to report (from get_proxy_token)
         status: Status to report ('success' or 'error')
         error: Error message if status is 'error'
-        proxy_url: URL of the HF-Inferoxy server (optional, will use PROXY_URL env var if not provided)
+        proxy_url: URL of the AI-Inferoxy server (optional, will use PROXY_URL env var if not provided)
         api_key: Your API key for authenticating with the proxy server
 
     Returns:
@@ -154,7 +154,7 @@ def report_token_status(
     for attempt in range(RETRY_ATTEMPTS):
        try:
            response = requests.post(
-                f"{proxy_url}/keys/report",
+                f"{proxy_url}/keys/report/hf",
                json=payload,
                headers=headers,
                timeout=REQUEST_TIMEOUT
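
For clients that talk to the proxy directly rather than through `hf_token_utils.py`, the renamed endpoints look roughly like this as raw HTTP calls; the Authorization scheme and response fields are assumptions, since header construction and response parsing are not part of these hunks:

# Rough sketch only: auth header scheme and JSON field names are assumed, not
# taken from this diff; the endpoint paths match the change above.
import os
import requests

PROXY_URL = os.environ["PROXY_URL"].rstrip("/")
HEADERS = {"Authorization": f"Bearer {os.environ['PROXY_KEY']}"}  # assumed scheme

# Provision an HF token (previously /keys/provision)
resp = requests.get(f"{PROXY_URL}/keys/provision/hf", headers=HEADERS, timeout=30)
resp.raise_for_status()
data = resp.json()  # assumed to carry the token and its id

# Report the outcome back (previously /keys/report)
requests.post(
    f"{PROXY_URL}/keys/report/hf",
    json={"token_id": data.get("token_id"), "status": "success"},  # illustrative payload
    headers=HEADERS,
    timeout=30,
)
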
image_handler.py CHANGED
@@ -1,5 +1,5 @@
 """
-Image generation functionality handler for HF-Inferoxy AI Hub.
+Image generation functionality handler for AI-Inferoxy AI Hub.
 Handles text-to-image generation with multiple providers.
 """
 
@@ -43,7 +43,7 @@ def generate_image(
     client_name: str | None = None,
 ):
     """
-    Generate an image using the specified model and provider through HF-Inferoxy.
+    Generate an image using the specified model and provider through AI-Inferoxy.
     """
     # Validate proxy API key
     is_valid, error_msg = validate_proxy_key()
@@ -54,7 +54,7 @@ def generate_image(
 
     token_id = None
     try:
-        # Get token from HF-Inferoxy proxy server with timeout handling
+        # Get token from AI-Inferoxy proxy server with timeout handling
         print(f"🔑 Image: Requesting token from proxy...")
         token, token_id = get_proxy_token(api_key=proxy_api_key)
         print(f"✅ Image: Got token: {token_id}")
@@ -113,7 +113,7 @@ def generate_image(
 
     except ConnectionError as e:
         # Handle proxy connection errors
-        error_msg = f"Cannot connect to HF-Inferoxy server: {str(e)}"
+        error_msg = f"Cannot connect to AI-Inferoxy server: {str(e)}"
         print(f"🔌 Image connection error: {error_msg}")
         if token_id:
             report_token_status(token_id, "error", error_msg, api_key=proxy_api_key, client_name=client_name)
@@ -167,7 +167,7 @@ def generate_image_to_image(
     client_name: str | None = None,
 ):
     """
-    Generate an image using image-to-image generation with the specified model and provider through HF-Inferoxy.
+    Generate an image using image-to-image generation with the specified model and provider through AI-Inferoxy.
     """
     # Validate proxy API key
     is_valid, error_msg = validate_proxy_key()
@@ -178,7 +178,7 @@ def generate_image_to_image(
 
     token_id = None
     try:
-        # Get token from HF-Inferoxy proxy server with timeout handling
+        # Get token from AI-Inferoxy proxy server with timeout handling
         print(f"🔑 Image-to-Image: Requesting token from proxy...")
         token, token_id = get_proxy_token(api_key=proxy_api_key)
         print(f"✅ Image-to-Image: Got token: {token_id}")
@@ -237,7 +237,7 @@ def generate_image_to_image(
 
     except ConnectionError as e:
         # Handle proxy connection errors
-        error_msg = f"Cannot connect to HF-Inferoxy server: {str(e)}"
+        error_msg = f"Cannot connect to AI-Inferoxy server: {str(e)}"
         print(f"🔌 Image-to-Image connection error: {error_msg}")
         if token_id:
             report_token_status(token_id, "error", error_msg, api_key=proxy_api_key, client_name=client_name)
tts_handler.py CHANGED
@@ -1,5 +1,5 @@
 """
-Text-to-speech functionality handler for HF-Inferoxy AI Hub.
+Text-to-speech functionality handler for AI-Inferoxy AI Hub.
 Handles text-to-speech generation with multiple providers.
 """
 
@@ -37,7 +37,7 @@ def generate_text_to_speech(
     client_name: str | None = None,
 ):
     """
-    Generate speech from text using the specified model and provider through HF-Inferoxy.
+    Generate speech from text using the specified model and provider through AI-Inferoxy.
     """
     # Validate proxy API key
     is_valid, error_msg = validate_proxy_key()
@@ -48,7 +48,7 @@ def generate_text_to_speech(
 
     token_id = None
    try:
-        # Get token from HF-Inferoxy proxy server with timeout handling
+        # Get token from AI-Inferoxy proxy server with timeout handling
         print(f"🔑 TTS: Requesting token from proxy...")
         token, token_id = get_proxy_token(api_key=proxy_api_key)
         print(f"✅ TTS: Got token: {token_id}")
@@ -115,7 +115,7 @@ def generate_text_to_speech(
 
     except ConnectionError as e:
         # Handle proxy connection errors
-        error_msg = f"Cannot connect to HF-Inferoxy server: {str(e)}"
+        error_msg = f"Cannot connect to AI-Inferoxy server: {str(e)}"
         print(f"🔌 TTS connection error: {error_msg}")
         if token_id:
             report_token_status(token_id, "error", error_msg, api_key=proxy_api_key, client_name=client_name)
ui_components.py CHANGED
@@ -1,5 +1,5 @@
 """
-UI components for HF-Inferoxy AI Hub.
+UI components for AI-Inferoxy AI Hub.
 Contains functions to create different sections of the Gradio interface.
 """
 
@@ -702,9 +702,9 @@ def create_image_examples(img_prompt):
 def create_main_header():
     """Create the main header for the application."""
     gr.Markdown("""
-    # 🚀 HF-Inferoxy AI Hub
+    # 🚀 AI-Inferoxy AI Hub
 
-    A comprehensive AI platform combining chat, image generation, image-to-image, text-to-video, and text-to-speech capabilities with intelligent token management through HF-Inferoxy.
+    A comprehensive AI platform combining chat, image generation, image-to-image, text-to-video, and text-to-speech capabilities with intelligent token management through AI-Inferoxy.
 
     **Features:**
     - 💬 **Smart Chat**: Conversational AI with streaming responses
@@ -724,7 +724,7 @@ def create_footer():
     ---
     ### 🔗 Links
     - **Project repo**: https://github.com/nazdridoy/inferoxy-hub
-    - **HF‑Inferoxy docs**: https://nazdridoy.github.io/hf-inferoxy/
+    - **AI‑Inferoxy docs**: https://nazdridoy.github.io/ai-inferoxy/
     - **License**: https://github.com/nazdridoy/inferoxy-hub/blob/main/LICENSE
     """
     )
utils.py CHANGED
@@ -1,5 +1,5 @@
 """
-Utility functions and constants for HF-Inferoxy AI Hub.
+Utility functions and constants for AI-Inferoxy AI Hub.
 Contains configuration constants and helper functions.
 """
 
video_handler.py CHANGED
@@ -1,5 +1,5 @@
 """
-Text-to-video functionality handler for HF-Inferoxy AI Hub.
+Text-to-video functionality handler for AI-Inferoxy AI Hub.
 Handles text-to-video generation with multiple providers.
 """
 
@@ -33,7 +33,7 @@ def generate_video(
     client_name: str | None = None,
 ):
     """
-    Generate a video using the specified model and provider through HF-Inferoxy.
+    Generate a video using the specified model and provider through AI-Inferoxy.
     Returns (video_bytes_or_url, status_message)
     """
     # Validate proxy API key
@@ -45,7 +45,7 @@ def generate_video(
 
     token_id = None
     try:
-        # Get token from HF-Inferoxy proxy server with timeout handling
+        # Get token from AI-Inferoxy proxy server with timeout handling
         print(f"🔑 Video: Requesting token from proxy...")
         token, token_id = get_proxy_token(api_key=proxy_api_key)
         print(f"✅ Video: Got token: {token_id}")
@@ -97,7 +97,7 @@ def generate_video(
         return video_output, format_success_message("Video generated", f"using {model_name} on {provider}")
 
     except ConnectionError as e:
-        error_msg = f"Cannot connect to HF-Inferoxy server: {str(e)}"
+        error_msg = f"Cannot connect to AI-Inferoxy server: {str(e)}"
         print(f"🔌 Video connection error: {error_msg}")
         if token_id:
             report_token_status(token_id, "error", error_msg, api_key=proxy_api_key, client_name=client_name)