refactor(core): rebrand to AI-Inferoxy and update API endpoints
- [api] Modify token provisioning endpoint to `/keys/provision/hf` (hf_token_utils.py:get_proxy_token():49)
- [api] Modify token status report endpoint to `/keys/report/hf` (hf_token_utils.py:report_token_status():156)
- [ui] Update Gradio application block title to "AI-Inferoxy AI Hub" (app.py:create_app():27)
- [ui] Change main header and descriptive text to reference "AI-Inferoxy" (ui_components.py:create_main_header():707,709)
- [docs] Update footer documentation link to "AI-Inferoxy docs" (ui_components.py:create_footer():728)
- [docs] Replace "HF-Inferoxy" with "AI-Inferoxy" in README.md titles, headings, descriptions, and links (README.md:2,11,13,27,39,56,60,74)
- [refactor] Update docstring, proxy server comment, and error message (chat_handler.py:chat_respond():37,50,130)
- [refactor] Update docstring and connection error messages (hf_token_utils.py:get_proxy_token():19,76,83)
- [refactor] Update docstrings, proxy comments, and error messages (image_handler.py:generate_image():45,57,115, generate_image_to_image():169,181,239)
- [refactor] Update docstring, proxy comment, and error message (tts_handler.py:generate_text_to_speech():39,51,117)
- [refactor] Update docstring, proxy comment, and error message (video_handler.py:generate_video():35,47,100)
- [refactor] Replace "HF-Inferoxy" with "AI-Inferoxy" in comments, docstrings, and error messages (utils.py)
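For reference, the endpoint rename in the first two bullets can be sketched from the client side. These helper functions are illustrative, not code from the repo, and the `/hf` suffix presumably scopes the routes to Hugging Face tokens:

```python
# Illustrative only: builds the renamed AI-Inferoxy key-management endpoints.
def provision_endpoint(proxy_url: str) -> str:
    # Old route: {proxy_url}/keys/provision  ->  New: {proxy_url}/keys/provision/hf
    return f"{proxy_url.rstrip('/')}/keys/provision/hf"

def report_endpoint(proxy_url: str) -> str:
    # Old route: {proxy_url}/keys/report  ->  New: {proxy_url}/keys/report/hf
    return f"{proxy_url.rstrip('/')}/keys/report/hf"

# `https://proxy.example.com` is a placeholder, matching the README's example.
print(provision_endpoint("https://proxy.example.com/"))  # → https://proxy.example.com/keys/provision/hf
```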
- README.md +9 -9
- app.py +2 -2
- chat_handler.py +4 -4
- hf_token_utils.py +6 -6
- image_handler.py +7 -7
- tts_handler.py +4 -4
- ui_components.py +4 -4
- utils.py +1 -1
- video_handler.py +4 -4
README.md

```diff
@@ -1,5 +1,5 @@
 ---
-title: HF-Inferoxy AI Hub
+title: AI-Inferoxy AI Hub
 emoji: π
 colorFrom: purple
 colorTo: blue
@@ -11,9 +11,9 @@ hf_oauth_authorized_org:
 - nazdev
 ---
 
-## π HF-Inferoxy AI Hub
+## π AI-Inferoxy AI Hub
 
-A focused, multi-modal AI workspace. Chat, create images, transform images, generate short videos, and synthesize speech — all routed through HF-Inferoxy for secure, quota-aware token management and provider failover.
+A focused, multi-modal AI workspace. Chat, create images, transform images, generate short videos, and synthesize speech — all routed through AI-Inferoxy for secure, quota-aware token management and provider failover.
 
 ### Highlights
 - Chat, Image, Image-to-Image, Video, and TTS in one app
@@ -23,7 +23,7 @@ A focused, multi-modal AI workspace. Chat, create images, transform images, ge
 
 ### Quick Start (Hugging Face Space)
 Add Space secrets:
-- `PROXY_URL`: HF-Inferoxy server URL (e.g., `https://proxy.example.com`)
+- `PROXY_URL`: AI-Inferoxy server URL (e.g., `https://proxy.example.com`)
 - `PROXY_KEY`: API key for your proxy
 
 Org access control: instead of a custom `ALLOWED_ORGS` secret and runtime checks, configure org restrictions in README metadata using `hf_oauth_authorized_org` per HF Spaces OAuth docs. Example:
@@ -38,7 +38,7 @@ hf_oauth_authorized_org:
 The app reads these at runtime — no extra setup required.
 
 ### How It Works
-1. The app requests a valid token from HF-Inferoxy for each call.
+1. The app requests a valid token from AI-Inferoxy for each call.
 2. Requests are sent to the selected provider (or `auto`).
 3. Status is reported back for rotation and telemetry.
 
@@ -54,12 +54,12 @@ The app reads these at runtime — no extra setup required.
 - Provider from dropdown. Default is `auto`.
 
 ### Providers
-Compatible with providers configured in HF-Inferoxy, including `auto` (default), `hf-inference`, `cerebras`, `cohere`, `groq`, `together`, `fal-ai`, `replicate`, `nebius`, `nscale`, and others.
+Compatible with providers configured in AI-Inferoxy, including `auto` (default), `hf-inference`, `cerebras`, `cohere`, `groq`, `together`, `fal-ai`, `replicate`, `nebius`, `nscale`, and others.
 
 ### Security
 - HF OAuth validates account; org membership is enforced by Space metadata (`hf_oauth_authorized_org`).
 - Inference uses proxy-managed tokens. Secrets are Space secrets.
-- RBAC, rotation, and quarantine handled by HF-Inferoxy.
+- RBAC, rotation, and quarantine handled by AI-Inferoxy.
 
 ### Troubleshooting
 - 401/403: verify secrets and org access.
@@ -72,9 +72,9 @@ This project is licensed under the GNU Affero General Public License v3.0 (AGPL-
 
 ### Links
 - Live Space: [huggingface.co/spaces/nazdridoy/inferoxy-hub](https://huggingface.co/spaces/nazdridoy/inferoxy-hub)
-- HF-Inferoxy docs: [hf-inferoxy/huggingface-hub-integration](https://nazdridoy.github.io/hf-inferoxy/)
+- AI-Inferoxy docs: [ai-inferoxy/huggingface-hub-integration](https://nazdridoy.github.io/ai-inferoxy/)
 - Gradio docs: [gradio.app/docs](https://gradio.app/docs/)
 
-β Built with HF-Inferoxy for intelligent token management.
+β Built with AI-Inferoxy for intelligent token management.
 
```
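The three "How It Works" steps in the README above can be sketched with stand-in callables. `run_inference`, `provision`, `infer`, and `report` are hypothetical names for this sketch, not the app's real functions (the app wires these roles to hf_token_utils and the provider clients):

```python
# Minimal sketch of the provision -> infer -> report loop.
def run_inference(prompt, provision, infer, report):
    token, token_id = provision()           # 1. request a managed token from the proxy
    try:
        result = infer(token, prompt)       # 2. send to the selected provider (or `auto`)
        report(token_id, "success", None)   # 3. report status for rotation and telemetry
        return result
    except Exception as exc:
        report(token_id, "error", str(exc)) # errors are reported too, then re-raised
        raise

# Exercise the flow with fakes that just record what happened.
events = []
out = run_inference(
    "hi",
    provision=lambda: ("tok", "id-1"),
    infer=lambda token, prompt: f"{prompt}:{token}",
    report=lambda tid, status, err: events.append((tid, status)),
)
```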
app.py

```diff
@@ -1,5 +1,5 @@
 """
-HF-Inferoxy AI Hub - Main application entry point.
+AI-Inferoxy AI Hub - Main application entry point.
 A comprehensive AI platform with chat and image generation capabilities.
 """
 
@@ -24,7 +24,7 @@ def create_app():
     """Create and configure the main Gradio application."""
 
     # Create the main Gradio interface with tabs
-    with gr.Blocks(title="HF-Inferoxy AI Hub", theme=get_gradio_theme()) as demo:
+    with gr.Blocks(title="AI-Inferoxy AI Hub", theme=get_gradio_theme()) as demo:
         # Sidebar with HF OAuth login/logout
         with gr.Sidebar():
             gr.LoginButton()
```
chat_handler.py

```diff
@@ -1,5 +1,5 @@
 """
-Chat functionality handler for HF-Inferoxy AI Hub.
+Chat functionality handler for AI-Inferoxy AI Hub.
 Handles chat completion requests with streaming responses.
 """
 
@@ -34,7 +34,7 @@ def chat_respond(
     client_name: str | None = None,
 ):
     """
-    Chat completion function using HF-Inferoxy token management.
+    Chat completion function using AI-Inferoxy token management.
     """
     # Validate proxy API key
     is_valid, error_msg = validate_proxy_key()
@@ -46,7 +46,7 @@ def chat_respond(
 
     token_id = None
     try:
-        # Get token from HF-Inferoxy proxy server with timeout handling
+        # Get token from AI-Inferoxy proxy server with timeout handling
         print(f"π Chat: Requesting token from proxy...")
         token, token_id = get_proxy_token(api_key=proxy_api_key)
         print(f"β Chat: Got token: {token_id}")
@@ -128,7 +128,7 @@ def chat_respond(
 
     except ConnectionError as e:
         # Handle proxy connection errors
-        error_msg = f"Cannot connect to HF-Inferoxy server: {str(e)}"
+        error_msg = f"Cannot connect to AI-Inferoxy server: {str(e)}"
         print(f"π Chat connection error: {error_msg}")
         if token_id:
             report_token_status(token_id, "error", error_msg, api_key=proxy_api_key, client_name=client_name)
```
hf_token_utils.py

```diff
@@ -17,7 +17,7 @@ def get_proxy_token(proxy_url: str = None, api_key: str = None) -> Tuple[str, str]:
     Get a valid token from the proxy server with timeout and retry logic.
 
     Args:
-        proxy_url: URL of the HF-Inferoxy server (optional, will use PROXY_URL env var if not provided)
+        proxy_url: URL of the AI-Inferoxy server (optional, will use PROXY_URL env var if not provided)
         api_key: Your API key for authenticating with the proxy server
 
     Returns:
@@ -46,7 +46,7 @@ def get_proxy_token(proxy_url: str = None, api_key: str = None) -> Tuple[str, str]:
             print(f"π Token provision attempt {attempt + 1}/{RETRY_ATTEMPTS}")
 
             response = requests.get(
-                f"{proxy_url}/keys/provision",
+                f"{proxy_url}/keys/provision/hf",
                 headers=headers,
                 timeout=REQUEST_TIMEOUT
             )
@@ -73,14 +73,14 @@ def get_proxy_token(proxy_url: str = None, api_key: str = None) -> Tuple[str, str]:
             print(f"π {error_msg}")
 
             if attempt == RETRY_ATTEMPTS - 1:  # Last attempt
-                raise ConnectionError(f"Cannot connect to HF-Inferoxy at {proxy_url}. Please check if the server is running.")
+                raise ConnectionError(f"Cannot connect to AI-Inferoxy at {proxy_url}. Please check if the server is running.")
 
         except Timeout as e:
             error_msg = f"Request timeout after {REQUEST_TIMEOUT}s: {str(e)}"
             print(f"β° {error_msg}")
 
             if attempt == RETRY_ATTEMPTS - 1:  # Last attempt
-                raise TimeoutError(f"Timeout connecting to HF-Inferoxy. Server may be overloaded.")
+                raise TimeoutError(f"Timeout connecting to AI-Inferoxy. Server may be overloaded.")
 
         except RequestException as e:
             error_msg = f"Request error: {str(e)}"
@@ -109,7 +109,7 @@ def report_token_status(
         token_id: ID of the token to report (from get_proxy_token)
         status: Status to report ('success' or 'error')
         error: Error message if status is 'error'
-        proxy_url: URL of the HF-Inferoxy server (optional, will use PROXY_URL env var if not provided)
+        proxy_url: URL of the AI-Inferoxy server (optional, will use PROXY_URL env var if not provided)
         api_key: Your API key for authenticating with the proxy server
 
     Returns:
@@ -154,7 +154,7 @@ def report_token_status(
     for attempt in range(RETRY_ATTEMPTS):
         try:
             response = requests.post(
-                f"{proxy_url}/keys/report",
+                f"{proxy_url}/keys/report/hf",
                 json=payload,
                 headers=headers,
                 timeout=REQUEST_TIMEOUT
```
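The retry loop in `get_proxy_token` above follows a generic provision-with-retry pattern. This standalone sketch mirrors it with a fake fetcher; it is not the module's actual code:

```python
# Generic retry sketch mirroring get_proxy_token: try up to `attempts` times,
# re-raising the last ConnectionError only after every attempt has failed.
def fetch_with_retry(fetch, attempts=3):
    last_err = None
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError as err:
            last_err = err  # remember the failure and retry
    raise last_err

# Fake fetcher that fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("proxy unreachable")
    return "token"

token = fetch_with_retry(flaky)
```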
image_handler.py

```diff
@@ -1,5 +1,5 @@
 """
-Image generation functionality handler for HF-Inferoxy AI Hub.
+Image generation functionality handler for AI-Inferoxy AI Hub.
 Handles text-to-image generation with multiple providers.
 """
 
@@ -43,7 +43,7 @@ def generate_image(
     client_name: str | None = None,
 ):
     """
-    Generate an image using the specified model and provider through HF-Inferoxy.
+    Generate an image using the specified model and provider through AI-Inferoxy.
     """
     # Validate proxy API key
     is_valid, error_msg = validate_proxy_key()
@@ -54,7 +54,7 @@ def generate_image(
 
     token_id = None
     try:
-        # Get token from HF-Inferoxy proxy server with timeout handling
+        # Get token from AI-Inferoxy proxy server with timeout handling
         print(f"π Image: Requesting token from proxy...")
         token, token_id = get_proxy_token(api_key=proxy_api_key)
         print(f"β Image: Got token: {token_id}")
@@ -113,7 +113,7 @@ def generate_image(
 
     except ConnectionError as e:
         # Handle proxy connection errors
-        error_msg = f"Cannot connect to HF-Inferoxy server: {str(e)}"
+        error_msg = f"Cannot connect to AI-Inferoxy server: {str(e)}"
         print(f"π Image connection error: {error_msg}")
         if token_id:
             report_token_status(token_id, "error", error_msg, api_key=proxy_api_key, client_name=client_name)
@@ -167,7 +167,7 @@ def generate_image_to_image(
     client_name: str | None = None,
 ):
     """
-    Generate an image using image-to-image generation with the specified model and provider through HF-Inferoxy.
+    Generate an image using image-to-image generation with the specified model and provider through AI-Inferoxy.
     """
     # Validate proxy API key
     is_valid, error_msg = validate_proxy_key()
@@ -178,7 +178,7 @@ def generate_image_to_image(
 
     token_id = None
     try:
-        # Get token from HF-Inferoxy proxy server with timeout handling
+        # Get token from AI-Inferoxy proxy server with timeout handling
         print(f"π Image-to-Image: Requesting token from proxy...")
         token, token_id = get_proxy_token(api_key=proxy_api_key)
         print(f"β Image-to-Image: Got token: {token_id}")
@@ -237,7 +237,7 @@ def generate_image_to_image(
 
     except ConnectionError as e:
         # Handle proxy connection errors
-        error_msg = f"Cannot connect to HF-Inferoxy server: {str(e)}"
+        error_msg = f"Cannot connect to AI-Inferoxy server: {str(e)}"
         print(f"π Image-to-Image connection error: {error_msg}")
         if token_id:
             report_token_status(token_id, "error", error_msg, api_key=proxy_api_key, client_name=client_name)
```
tts_handler.py

```diff
@@ -1,5 +1,5 @@
 """
-Text-to-speech functionality handler for HF-Inferoxy AI Hub.
+Text-to-speech functionality handler for AI-Inferoxy AI Hub.
 Handles text-to-speech generation with multiple providers.
 """
 
@@ -37,7 +37,7 @@ def generate_text_to_speech(
     client_name: str | None = None,
 ):
     """
-    Generate speech from text using the specified model and provider through HF-Inferoxy.
+    Generate speech from text using the specified model and provider through AI-Inferoxy.
     """
     # Validate proxy API key
     is_valid, error_msg = validate_proxy_key()
@@ -48,7 +48,7 @@ def generate_text_to_speech(
 
     token_id = None
     try:
-        # Get token from HF-Inferoxy proxy server with timeout handling
+        # Get token from AI-Inferoxy proxy server with timeout handling
         print(f"π TTS: Requesting token from proxy...")
         token, token_id = get_proxy_token(api_key=proxy_api_key)
         print(f"β TTS: Got token: {token_id}")
@@ -115,7 +115,7 @@ def generate_text_to_speech(
 
     except ConnectionError as e:
         # Handle proxy connection errors
-        error_msg = f"Cannot connect to HF-Inferoxy server: {str(e)}"
+        error_msg = f"Cannot connect to AI-Inferoxy server: {str(e)}"
         print(f"π TTS connection error: {error_msg}")
         if token_id:
             report_token_status(token_id, "error", error_msg, api_key=proxy_api_key, client_name=client_name)
```
|
@@ -1,5 +1,5 @@
|
|
1 |
"""
|
2 |
-
UI components for
|
3 |
Contains functions to create different sections of the Gradio interface.
|
4 |
"""
|
5 |
|
@@ -702,9 +702,9 @@ def create_image_examples(img_prompt):
|
|
702 |
def create_main_header():
|
703 |
"""Create the main header for the application."""
|
704 |
gr.Markdown("""
|
705 |
-
# π
|
706 |
|
707 |
-
A comprehensive AI platform combining chat, image generation, image-to-image, text-to-video, and text-to-speech capabilities with intelligent token management through
|
708 |
|
709 |
**Features:**
|
710 |
- π¬ **Smart Chat**: Conversational AI with streaming responses
|
@@ -724,7 +724,7 @@ def create_footer():
|
|
724 |
---
|
725 |
### π Links
|
726 |
- **Project repo**: https://github.com/nazdridoy/inferoxy-hub
|
727 |
-
- **
|
728 |
- **License**: https://github.com/nazdridoy/inferoxy-hub/blob/main/LICENSE
|
729 |
"""
|
730 |
)
|
|
|
1 |
"""
|
2 |
+
UI components for AI-Inferoxy AI Hub.
|
3 |
Contains functions to create different sections of the Gradio interface.
|
4 |
"""
|
5 |
|
|
|
702 |
def create_main_header():
|
703 |
"""Create the main header for the application."""
|
704 |
gr.Markdown("""
|
705 |
+
# π AI-Inferoxy AI Hub
|
706 |
|
707 |
+
A comprehensive AI platform combining chat, image generation, image-to-image, text-to-video, and text-to-speech capabilities with intelligent token management through AI-Inferoxy.
|
708 |
|
709 |
**Features:**
|
710 |
- π¬ **Smart Chat**: Conversational AI with streaming responses
|
|
|
724 |
---
|
725 |
### π Links
|
726 |
- **Project repo**: https://github.com/nazdridoy/inferoxy-hub
|
727 |
+
- **AIβInferoxy docs**: https://nazdridoy.github.io/ai-inferoxy/
|
728 |
- **License**: https://github.com/nazdridoy/inferoxy-hub/blob/main/LICENSE
|
729 |
"""
|
730 |
)
|
utils.py

```diff
@@ -1,5 +1,5 @@
 """
-Utility functions and constants for HF-Inferoxy AI Hub.
+Utility functions and constants for AI-Inferoxy AI Hub.
 Contains configuration constants and helper functions.
 """
 
```
video_handler.py

```diff
@@ -1,5 +1,5 @@
 """
-Text-to-video functionality handler for HF-Inferoxy AI Hub.
+Text-to-video functionality handler for AI-Inferoxy AI Hub.
 Handles text-to-video generation with multiple providers.
 """
 
@@ -33,7 +33,7 @@ def generate_video(
     client_name: str | None = None,
 ):
     """
-    Generate a video using the specified model and provider through HF-Inferoxy.
+    Generate a video using the specified model and provider through AI-Inferoxy.
     Returns (video_bytes_or_url, status_message)
     """
     # Validate proxy API key
@@ -45,7 +45,7 @@ def generate_video(
 
     token_id = None
    try:
-        # Get token from HF-Inferoxy proxy server with timeout handling
+        # Get token from AI-Inferoxy proxy server with timeout handling
         print(f"π Video: Requesting token from proxy...")
         token, token_id = get_proxy_token(api_key=proxy_api_key)
         print(f"β Video: Got token: {token_id}")
@@ -97,7 +97,7 @@ def generate_video(
         return video_output, format_success_message("Video generated", f"using {model_name} on {provider}")
 
     except ConnectionError as e:
-        error_msg = f"Cannot connect to HF-Inferoxy server: {str(e)}"
+        error_msg = f"Cannot connect to AI-Inferoxy server: {str(e)}"
         print(f"π Video connection error: {error_msg}")
         if token_id:
             report_token_status(token_id, "error", error_msg, api_key=proxy_api_key, client_name=client_name)
```