Text Classification
Transformers
Safetensors
English
bert
fill-mask
BERT
transformer
nlp
bert-lite
edge-ai
low-resource
micro-nlp
quantized
iot
wearable-ai
offline-assistant
intent-detection
real-time
smart-home
embedded-systems
command-classification
toy-robotics
voice-ai
eco-ai
english
lightweight
mobile-nlp
ner
on-device-nlp
privacy-first
cpu-inference
speech-intent
offline-nlp
tiny-bert
bert-variant
efficient-nlp
edge-ml
tiny-ml
aiot
embedded-nlp
low-latency
smart-devices
edge-inference
ml-on-microcontrollers
android-nlp
offline-chatbot
esp32-nlp
tflite-compatible
Update README.md
README.md CHANGED
@@ -33,6 +33,26 @@ tags:
 - lightweight
 - mobile-nlp
 - ner
+- on-device-nlp
+- privacy-first
+- cpu-inference
+- speech-intent
+- offline-nlp
+- tiny-bert
+- bert-variant
+- efficient-nlp
+- edge-ml
+- tiny-ml
+- aiot
+- embedded-nlp
+- low-latency
+- smart-devices
+- edge-inference
+- ml-on-microcontrollers
+- android-nlp
+- offline-chatbot
+- esp32-nlp
+- tflite-compatible
 metrics:
 - accuracy
 - f1
@@ -43,7 +63,7 @@ library_name: transformers
 
 
 
-# 🧠 BERT-Lite
+# 🧠 BERT-Lite : Ultra-Lightweight BERT for Edge & IoT Efficiency 🚀
 
 [](https://opensource.org/licenses/MIT)
 [](#)
@@ -72,7 +92,11 @@ library_name: transformers
 
 ## Overview
 
-
+**BERT-Lite** is an **ultra-lightweight**, general-purpose NLP model derived from [google/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased), designed for **real-time inference** in highly constrained environments such as **edge devices, microcontrollers, and smart home systems**.
+
+With a quantized size of just **~10MB** and **~2M parameters**, BERT-Lite enables efficient **contextual language understanding** for both **general NLP tasks** and **resource-sensitive applications**.
+
+Whether you're building a privacy-first mobile app, an offline assistant, or a smart IoT device, BERT-Lite offers fast, accurate NLP performance without relying on cloud services.
 
 - **Model Name**: BERT-Lite
 - **Size**: ~10MB (quantized)