yeonseok-zeticai committed (verified)
Commit 67fe046 · 1 Parent(s): e8daec3

Update README.md

Files changed (1): README.md (+129 -7)
README.md CHANGED
@@ -1,10 +1,132 @@
  ---
- title: README
- emoji: 💻
- colorFrom: green
- colorTo: indigo
- sdk: static
- pinned: false
  ---

- Edit this `README.md` markdown file to author your organization card.
+ # 🛰️ ZETIC.ai — On-Device AI for Every Device
+
+ **Build. Deploy. Run. Anywhere.**
+ ZETIC.ai helps AI engineers deploy models on *any* mobile device — without cloud GPU servers.
+ We transform your existing AI models into **NPU-optimized, on-device runtimes** in **under 24 hours**.
+
+ ---
+
+ ## 🚀 What We Do
+
+ **ZETIC.MLange** — our core platform — enables **serverless AI** through:
+ - **Automated Conversion**: Convert your PyTorch, ONNX, or TFLite model into a device-specific NPU library (see the export sketch below).
+ - **Peak Performance**: Up to **60× faster** than GPU cloud inference, with zero accuracy loss.
+ - **Broad Compatibility**: Supports Android, iOS, and Linux on MediaTek, Qualcomm, and Apple NPUs — more coming soon.
+ - **End-to-End SDK**: From model optimization to app integration — no extra engineering required.
+
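+ Preparing a model for MLange is an ordinary framework export, since the platform takes PyTorch, ONNX, or TFLite artifacts as input. Below is a minimal sketch of that step using the standard `torch.onnx.export` API; the tiny stand-in network, input shape, and file name are placeholders for illustration, not MLange tooling:
+
+ ```python
+ # Export a PyTorch model to ONNX before uploading it to ZETIC.MLange.
+ # The Sequential below is a hypothetical stand-in; use your own trained module.
+ import torch
+ import torch.nn as nn
+
+ model = nn.Sequential(
+     nn.Conv2d(3, 16, kernel_size=3, padding=1),
+     nn.ReLU(),
+     nn.AdaptiveAvgPool2d(1),
+     nn.Flatten(),
+     nn.Linear(16, 10),
+ ).eval()
+
+ dummy_input = torch.randn(1, 3, 224, 224)   # example image-shaped input
+
+ torch.onnx.export(
+     model,
+     dummy_input,
+     "my_model.onnx",          # the artifact you would upload to MLange
+     input_names=["input"],
+     output_names=["output"],
+     opset_version=17,
+ )
+ ```
+ Models that are already ONNX or TFLite files can be uploaded as they are.
+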
+ ---
+
+ ## 🛠 Key Features
+
+ - **Zero GPU Costs** — Replace expensive GPU cloud servers with the *free* NPU power already in users' devices.
+ - **Full Privacy & Security** — Data never leaves the device.
+ - **Ultra-Low Latency** — Real-time AI experiences, even offline.
+ - **Cross-Platform** — One model → All devices → Same performance.
+
+ ---
+
+ ## 📦 Example Use Cases
+
+ - 🎙 **Speech Recognition (Whisper)** — Real-time, offline transcription on mobile.
+ - 🦷 **Dental AI Diagnostics** — Instant tooth-condition analysis via smartphone camera.
+ - 🏌️ **Sports AI** — On-device golf swing analytics.
+ - 🤖 **On-Device LLMs** — Chat & reasoning models running entirely offline.
+
+ ---
+
+ ## 📊 Benchmarks
+
+ | Device | Task | Cloud GPU | On-Device NPU | Speedup |
+ |--------|------|-----------|---------------|---------|
+ | iPhone 16 Pro | Whisper-Small | 1.2s | 0.07s | **×17** |
+ | Galaxy S24 Ultra | LLaMA-3-8B | 2.4s/token | 0.09s/token | **×26** |
+
+ [🔗 See more benchmarks »](https://mlange.zetic.ai)
+
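+ The Speedup column above is the cloud latency divided by the on-device latency, truncated to a whole multiple. A quick check of the two rows (plain Python, illustration only):
+
+ ```python
+ # Reproduce the Speedup column: cloud latency / on-device NPU latency.
+ rows = {
+     "Whisper-Small on iPhone 16 Pro": (1.2, 0.07),    # seconds per run
+     "LLaMA-3-8B on Galaxy S24 Ultra": (2.4, 0.09),    # seconds per token
+ }
+
+ for name, (cloud, on_device) in rows.items():
+     print(f"{name}: x{int(cloud / on_device)}")       # prints x17 and x26
+ ```
+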
+
+ ### YOLOv8n — NPU Latency (ms)
+ | Index | Device | Manufacturer | CPU | GPU | CPU/GPU | NPU |
+ |-------|--------|--------------|-----|-----|---------|-----|
+ | 1 | Google Pixel 7 Pro | Google | 205.43 | - | 404.93 | - |
+ | 2 | Apple iPhone 16 | Apple | 126.27 | - | 8.98 | **2.03** |
+ | 3 | Apple iPhone 16 Pro | Apple | 122.23 | - | 7.54 | **1.69** |
+ | 4 | Apple iPad (2022) | Apple | 167.97 | - | 19.26 | **4.74** |
+ | 5 | Apple iPhone 14 | Apple | 133.21 | - | 17.13 | **4.68** |
+ | ... | ... | ... | ... | ... | ... | ... |
+
  ---
+
+ ### Whisper-tiny-encoder — NPU Latency (ms)
+ | Index | Device | Manufacturer | CPU | GPU | CPU/GPU | NPU |
+ |-------|--------|--------------|-----|-----|---------|-----|
+ | 1 | Apple iPhone 16 | Apple | 552.13 | - | 44.49 | **19.01** |
+ | 2 | Apple iPhone 16 Pro | Apple | 540.04 | - | 42.43 | **18.15** |
+ | 3 | Apple iPhone 16 Pro Max | Apple | 526.12 | - | 40.72 | **17.92** |
+ | 4 | Apple iPhone 15 | Apple | 605.12 | - | 48.88 | **18.23** |
+ | 5 | Apple iPhone 15 Plus | Apple | 632.67 | - | 49.20 | **23.35** |
+ | ... | ... | ... | ... | ... | ... | ... |
+
+ - **You can get the runtime source code and a benchmark report for your own model with [ZETIC.MLange](https://mlange.zetic.ai).**
+
+ ---
+
+ ## 👨🏻‍💻 Plug and Play in Your App
+
+ - ZETIC.MLange also ships a runtime SDK for your converted model, so you can drop it straight into your app.
+
+ - **iOS Integration** (Swift)
+ ```swift
+ import ZeticMLange
+
+ // Load the NPU-optimized model
+ let model = try MLangeModel.load(from: "whisper_small_npu.mlange")
+
+ // Prepare the input
+ let inputAudio = loadAudioTensor("sample.wav") // implement your own loader
+
+ // Run inference
+ let output = try model.run(input: inputAudio)
+
+ print("Transcription: \(output.text)")
+ ```
+
+ - **Android Integration** (Kotlin, Java)
+ ```kotlin
+ import ai.zetic.mlange.MLangeModel
+
+ // Load the packaged NPU runtime
+ val model = MLangeModel.load(context, "llama3_small_npu.mlange")
+
+ // Run text generation
+ val prompt = "Tell me a short story about a fox."
+ val output = model.run(prompt)
+
+ println("🦊 AI says: $output")
+ ```
+
+
+ ## 📥 Try It Now
+
+ - **MLange Dashboard**: [https://mlange.zetic.ai](https://mlange.zetic.ai)
+ - **Demo Apps**: [App Store](https://apps.apple.com/app/zeticapp/id6739862746) / [Google Play](https://play.google.com/store/apps/details?id=com.zeticai.zeticapp)
+
+ ---
+
+ ## 🧭 Supported Targets
+
+ - **OS**: Android, iOS, Linux
+ - **NPUs**: MediaTek, Qualcomm, Apple (more coming)
+ - **Frameworks In**: PyTorch, ONNX, TFLite (see the sanity-check sketch after this list)
+ - **Artifacts Out**: NPU-optimized runtime libraries + SDK bindings (Kotlin, Java, Swift, Flutter, React Native)
+
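+ Before uploading, it can help to sanity-check the exported ONNX file locally so that the "zero accuracy loss" comparison against the NPU build starts from a known-good baseline. A minimal sketch with the standard `onnxruntime` package (a generic tool, not part of MLange; the file name is a placeholder):
+
+ ```python
+ # Run one inference on the exported ONNX file to confirm it loads and produces output.
+ # "my_model.onnx" is a hypothetical placeholder.
+ import numpy as np
+ import onnxruntime as ort
+
+ session = ort.InferenceSession("my_model.onnx")
+
+ # Build a random input matching the model's declared shape (dynamic dims -> 1).
+ input_meta = session.get_inputs()[0]
+ shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
+ dummy = np.random.rand(*shape).astype(np.float32)
+
+ outputs = session.run(None, {input_meta.name: dummy})
+ print("output shapes:", [o.shape for o in outputs])
+ ```
+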
+ ## 📬 Contact Us
+
+ - **Website**: [https://zetic.ai](https://zetic.ai)
+ - **Email**: [email protected]
+ - **LinkedIn**: [linkedin.com/company/zetic-ai](https://linkedin.com/company/zetic-ai)
+
  ---

+ **ZETIC.ai** — AI for All, Anytime, Anywhere.
+ Run your AI where it matters: **on the device.**