boltuix committed (verified)
Commit ad3b0c4 · 1 parent: d003f41

Update README.md

Files changed (1): README.md (+6, -6)
README.md CHANGED
@@ -7,7 +7,7 @@ datasets:
   - sentence-transformers/all-nli
 language:
   - en
-new_version: v1.1
+new_version: v1.3
 base_model:
   - google-bert/bert-base-uncased
 pipeline_tag: text-classification
@@ -47,7 +47,7 @@ metrics:
 library_name: transformers
 ---
 
-![Banner](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjiN6PBPuhYCHIccZ928lJmHOOkZXqmQImFyNX8Iu8qrJ0f1tUOJQ5or518FYp8PS_APQqMCjzks0UxyoRkWOAYAbvFs_X8K9bSRpWmB8BzdlJYjRdkDPpqJcARjOifTljhVECYV9g6tIEjqxLhIU3WqUUAEuYD2WUKP3P863pXDvE79qv67P35p04mi00/s16000/NeuroBERT-Mini.jpg)
+![Banner](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs6UMbtcJ-ZILgsgLUgT63wj2g6oQh4p_c1_rMSTdHz7bVTE3TsQl3eLCIiW3NYAa40HZEhniWjrImtW3tvy2WCsDjZTeovB1QUnM-UjYs5tX-e33B9jpmmDXM547V-KBLySAUtKNtiQqceMQwXFHJHLMX8DKjvPx-n9eUJTGmxIaN6-tifIe-gz4dUGk/s4000/NeuroBERT-Mini.jpg)
 
 # 🧠 boltuix/NeuroBERT-Mini – The Ultimate Lightweight NLP Powerhouse! 🚀
 
@@ -82,9 +82,9 @@ Say hello to `NeuroBERT-Mini`, the **game-changing NLP model** that brings **wor
 
 | Feature                | Description                                           |
 |------------------------|-------------------------------------------------------|
-| 🔍 **Architecture**    | Nimble BERT (4 layers, hidden size 256)               |
-| ⚙️ **Parameters**      | ~11M, quantized to a sleek ~35MB                      |
-| 💾 **Model Size**      | ~35MB; ideal for edge devices                         |
+| 🔍 **Architecture**    | Nimble BERT (8 layers, hidden size 256)               |
+| ⚙️ **Parameters**      | ~30M, quantized to a sleek ~50MB                      |
+| 💾 **Model Size**      | ~50MB; ideal for edge devices                         |
 | ⚡ **Speed**           | Ultra-fast inference (<50ms on edge devices)          |
 | 🌍 **Use Cases**       | NER, intent detection, offline chatbots, voice AI     |
 | 📚 **Datasets**        | Wikipedia, BookCorpus, MNLI, All-NLI                  |
@@ -179,7 +179,7 @@ Input: The capital of France is [MASK].
 - **MNLI (MultiNLI)**: Built for natural language inference.
 - **All-NLI**: Enhanced with extra NLI data for smarter understanding.
 
-*Fine-Tuning Brilliance*: Starting from `google-bert/bert-base-uncased` (12 layers, 768 hidden, 110M parameters), NeuroBERT-Mini was fine-tuned to a streamlined 4 layers, 256 hidden, and ~10M parameters, creating a compact yet powerful NLP solution for edge AI! 🪄
+*Fine-Tuning Brilliance*: Starting from `google-bert/bert-base-uncased` (12 layers, 768 hidden, 110M parameters), NeuroBERT-Mini was fine-tuned to a streamlined 8 layers, 256 hidden, and ~30M parameters, creating a compact yet powerful NLP solution for edge AI! 🪄
 
 ---
 
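The substance of this commit is the spec bump: the card now advertises 8 layers and ~30M parameters (up from 4 layers / ~11M) at a ~50MB quantized footprint. A quick way to sanity-check the new numbers against the published checkpoint is to load its config and count parameters. This is a minimal sketch, assuming the updated weights are live under `boltuix/NeuroBERT-Mini` on the Hugging Face Hub:

```python
# Minimal sketch: check the architecture numbers claimed in the updated card.
# Assumes the v1.3 checkpoint is published at "boltuix/NeuroBERT-Mini".
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("boltuix/NeuroBERT-Mini")
print(config.num_hidden_layers)  # card claims 8
print(config.hidden_size)        # card claims 256

model = AutoModel.from_pretrained("boltuix/NeuroBERT-Mini")
n_params = sum(p.numel() for p in model.parameters())
print(f"~{n_params / 1e6:.0f}M parameters")  # card claims ~30M
```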
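The last hunk's context line also quotes the card's masked-LM demo (`Input: The capital of France is [MASK].`). A hedged sketch of that demo follows; it assumes the checkpoint ships masked-LM weights compatible with the `fill-mask` pipeline, which the card's `pipeline_tag: text-classification` does not by itself guarantee:

```python
# Hedged sketch of the card's masked-LM demo; assumes the checkpoint exposes
# masked-LM weights usable with the fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="boltuix/NeuroBERT-Mini")
for pred in fill_mask("The capital of France is [MASK]."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```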