---
license: apache-2.0
datasets:
- boltuix/conll2025-ner
language:
- en
metrics:
- precision
- recall
- f1
- accuracy
pipeline_tag: token-classification
library_name: transformers
new_version: v1.1
tags:
- token-classification
- ner
- named-entity-recognition
- text-classification
- sequence-labeling
- transformer
- bert
- nlp
- pretrained-model
- dataset-finetuning
- deep-learning
- huggingface
- conll2025
- real-time-inference
- efficient-nlp
- high-accuracy
- gpu-optimized
- chatbot
- information-extraction
- search-enhancement
- knowledge-graph
- legal-nlp
- medical-nlp
- financial-nlp
base_model:
- boltuix/NeuroBERT-Mini
---

![Banner](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSxp6k0-d8GrfNCp80SdfFWnfcVkxXoJ7hOp-m0L4RwFbhebprSmNE03rbEs02U3RC9Y48ehn5B1t4FJJRlTdqMhbxOTbPtx4sRQwBrsNI0zW_LtFMSjVpO0rom30Ozej7gh6uwokBexDW7FxAoEmfnb69zzn89SAGjU0IVmuppl6f5ahkptZBpi3UWT0/s16000/NER.jpg)

# 🌟 NeuroBERT-NER Model 🌟

## πŸš€ Model Details

### 🌈 Description
The `boltuix/NeuroBERT-NER` model is a fine-tuned transformer for **Named Entity Recognition (NER)**, built on the `boltuix/NeuroBERT-Mini` base model. It excels at identifying 18 entity types (e.g., people, places, organizations, dates, money) in English text, making it ideal for applications like information extraction, chatbots, and knowledge graph construction.

- **Dataset**: [boltuix/conll2025-ner](https://huggingface.co/datasets/boltuix/conll2025-ner) (143,709 entries, 6.38 MB)
- **Entity Labels**: 18 entity categories in the BIO scheme β€” 36 B-/I- tags plus the O tag (37 labels in total)
- **Training Examples**: ~115,812 | **Validation**: ~15,680 | **Test**: ~12,217
  *Note*: The three splits sum to 143,709; verify exact counts against the dataset.
- **Domains**: News, user-generated content, research corpora
- **Tasks**: Sentence-level and document-level NER
- **Version**: v1.1

### πŸ”§ Info
- **Developer**: Boltuix πŸ§™β€β™‚οΈ
- **License**: Apache-2.0 πŸ“œ
- **Language**: English πŸ‡¬πŸ‡§
- **Type**: Transformer-based Token Classification πŸ€–
- **Trained**: Before May 28, 2025
- **Base Model**: `boltuix/NeuroBERT-Mini`
- **Parameters**: ~11M

### πŸ”— Links
- **Model Repository**: [boltuix/NeuroBERT-NER](https://huggingface.co/boltuix/NeuroBERT-NER)
- **Dataset**: [boltuix/conll2025-ner](https://huggingface.co/datasets/boltuix/conll2025-ner)
- **Hugging Face Docs**: [Transformers](https://huggingface.co/docs/transformers)
- **Demo**: Coming Soon

---

## 🎯 Use Cases for NER

### 🌟 Direct Applications
- **Information Extraction**: Extract names (πŸ‘€ PERSON), locations (🌍 GPE), and dates (πŸ—“οΈ DATE) from news, blogs, and reports.
- **Chatbots & Virtual Assistants**: Enhance contextual awareness by recognizing entities in user queries.
- **Search Enhancement**: Power semantic search with entity-based indexing (e.g., β€œarticles mentioning Tokyo in 2025”).
- **Knowledge Graphs**: Build structured graphs linking entities like 🏒 ORG and πŸ‘€ PERSON.

### 🌱 Downstream Tasks
- **Domain Adaptation**: Fine-tune for medical 🩺, legal πŸ“œ, or financial πŸ’Έ NER.
- **Multilingual Extensions**: Retrain for non-English languages.
- **Custom Entities**: Adapt for finance (e.g., stock tickers), e-commerce (e.g., product SKUs), or other specialized domains.

### ❌ Limitations
- **English-Only**: Out-of-the-box support is limited to English text.
- **Domain Bias**: Trained on `boltuix/conll2025-ner`, which leans toward news and formal text; it may underperform on informal, social media, or code-mixed text.
- **Generalization**: May struggle with low-resource or highly contextual entities not well represented in the dataset.

---

![Banner](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJ-8ovFjLbR_4lap-DlMaTLAuLjLhojV4LC0nUaAosL7q4bTGqBSHJZ3lCbpKyb7SmJ71bUOltf35yPAaHA9xz3a8QnhRGxsHiiaNxCbjBIJSv-i37WJngr9hrRdKEH4cKtH-YiVuFjSywXWpn3hQXMrm3OmmBwMD-M2vxMF2-fXXREBeI0GnAJn5uXEc/s4000/NER.jpg)

## πŸ› οΈ Getting Started

### πŸ§ͺ Inference Code
Use the model for NER with the following Python code:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("boltuix/NeuroBERT-NER")
model = AutoModelForTokenClassification.from_pretrained("boltuix/NeuroBERT-NER")

# Input text
text = "Barack Obama visited Microsoft headquarters in Seattle on January 2025."
inputs = tokenizer(text, return_tensors="pt")

# Run inference
with torch.no_grad():
    outputs = model(**inputs)
predictions = outputs.logits.argmax(dim=-1)

# Map predictions to labels
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
label_map = model.config.id2label
labels = [label_map[p.item()] for p in predictions[0]]

# Print results
for token, label in zip(tokens, labels):
    if token not in tokenizer.all_special_tokens:
        print(f"{token:15} β†’ {label}")
```

### ✨ Example Output
```
Barack          β†’ B-PERSON
Obama           β†’ I-PERSON
visited         β†’ O
Microsoft       β†’ B-ORG
headquarters    β†’ O
in              β†’ O
Seattle         β†’ B-GPE
on              β†’ O
January         β†’ B-DATE
2025            β†’ I-DATE
.               β†’ O
```

### πŸ› οΈ Requirements
```bash
pip install transformers torch pandas pyarrow
```
- **Python**: 3.8+
- **Storage**: ~50 MB for model weights
- **Optional**: `seqeval` for evaluation, a CUDA-capable GPU for acceleration
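If you want grouped entities rather than per-token tags, the Transformers `pipeline` API can merge subword predictions into spans. A minimal sketch, assuming the hosted checkpoint ships with the label mappings shown above (`aggregation_strategy="simple"` is a standard option of the token-classification pipeline):

```python
from transformers import pipeline

# Token-classification pipeline that merges B-/I- subword predictions into entity spans
ner = pipeline(
    "token-classification",
    model="boltuix/NeuroBERT-NER",
    aggregation_strategy="simple",  # group subwords and continuations into one entity
)

text = "Barack Obama visited Microsoft headquarters in Seattle on January 2025."
for entity in ner(text):
    # Each item carries the entity group, surface text, confidence score, and character offsets
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```

This is equivalent to the manual decoding loop above, but the pipeline handles subword merging for you.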
---

## 🧠 Entity Labels
The model predicts labels from the `boltuix/conll2025-ner` dataset using the **BIO tagging scheme**: 36 B-/I- tags covering 18 entity categories, plus the O tag (37 labels in total).

- **B-**: Beginning of an entity
- **I-**: Inside of an entity
- **O**: Outside of any entity

| Tag Name         | Purpose                                                                  | Emoji |
|------------------|--------------------------------------------------------------------------|-------|
| O                | Outside of any named entity (e.g., "the", "is")                          | 🚫    |
| B-CARDINAL       | Beginning of a cardinal number (e.g., "1000")                            | πŸ”’    |
| B-DATE           | Beginning of a date (e.g., "January")                                    | πŸ—“οΈ    |
| B-EVENT          | Beginning of an event (e.g., "Olympics")                                 | πŸŽ‰    |
| B-FAC            | Beginning of a facility (e.g., "Eiffel Tower")                           | πŸ›οΈ    |
| B-GPE            | Beginning of a geopolitical entity (e.g., "Tokyo")                       | 🌍    |
| B-LANGUAGE       | Beginning of a language (e.g., "Spanish")                                | πŸ—£οΈ    |
| B-LAW            | Beginning of a law or legal document (e.g., "Constitution")              | πŸ“œ    |
| B-LOC            | Beginning of a non-GPE location (e.g., "Pacific Ocean")                  | πŸ—ΊοΈ    |
| B-MONEY          | Beginning of a monetary value (e.g., "$100")                             | πŸ’Έ    |
| B-NORP           | Beginning of a nationality/religious/political group (e.g., "Democrat")  | 🏳️    |
| B-ORDINAL        | Beginning of an ordinal number (e.g., "first")                           | πŸ₯‡    |
| B-ORG            | Beginning of an organization (e.g., "Microsoft")                         | 🏒    |
| B-PERCENT        | Beginning of a percentage (e.g., "50%")                                  | πŸ“Š    |
| B-PERSON         | Beginning of a person’s name (e.g., "Elon Musk")                         | πŸ‘€    |
| B-PRODUCT        | Beginning of a product (e.g., "iPhone")                                  | πŸ“±    |
| B-QUANTITY       | Beginning of a quantity (e.g., "two liters")                             | βš–οΈ    |
| B-TIME           | Beginning of a time (e.g., "noon")                                       | ⏰    |
| B-WORK_OF_ART    | Beginning of a work of art (e.g., "Mona Lisa")                           | 🎨    |
| I-CARDINAL       | Inside of a cardinal number (e.g., "000" in "1000")                      | πŸ”’    |
| I-DATE           | Inside of a date (e.g., "2025" in "January 2025")                        | πŸ—“οΈ    |
| I-EVENT          | Inside of an event name                                                  | πŸŽ‰    |
| I-FAC            | Inside of a facility name                                                | πŸ›οΈ    |
| I-GPE            | Inside of a geopolitical entity                                          | 🌍    |
| I-LANGUAGE       | Inside of a language name                                                | πŸ—£οΈ    |
| I-LAW            | Inside of a legal document title                                         | πŸ“œ    |
| I-LOC            | Inside of a location                                                     | πŸ—ΊοΈ    |
| I-MONEY          | Inside of a monetary value                                               | πŸ’Έ    |
| I-NORP           | Inside of a NORP entity                                                  | 🏳️    |
| I-ORDINAL        | Inside of an ordinal number                                              | πŸ₯‡    |
| I-ORG            | Inside of an organization name                                           | 🏒    |
| I-PERCENT        | Inside of a percentage                                                   | πŸ“Š    |
| I-PERSON         | Inside of a person’s name                                                | πŸ‘€    |
| I-PRODUCT        | Inside of a product name                                                 | πŸ“±    |
| I-QUANTITY       | Inside of a quantity                                                     | βš–οΈ    |
| I-TIME           | Inside of a time phrase                                                  | ⏰    |
| I-WORK_OF_ART    | Inside of a work of art title                                            | 🎨    |

**Example**:
Text: `"Microsoft opened in Tokyo on January 2025"`
Tags: `[B-ORG, O, O, B-GPE, O, B-DATE, I-DATE]`
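To confirm the label set the checkpoint actually ships with (rather than relying on this table), you can read it from the model config. A minimal sketch, assuming the uploaded `config.json` contains the `id2label` mapping set during fine-tuning:

```python
from transformers import AutoConfig

# The label inventory travels with the checkpoint in config.json
config = AutoConfig.from_pretrained("boltuix/NeuroBERT-NER")
print(f"{len(config.id2label)} labels")
print(sorted(config.id2label.values()))
```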
---

## πŸ“ˆ Performance
Evaluated on the `boltuix/conll2025-ner` test split using `seqeval`:

| Metric       | Score |
|--------------|-------|
| 🎯 Precision | 0.85  |
| πŸ•ΈοΈ Recall    | 0.87  |
| 🎢 F1 Score  | 0.86  |
| βœ… Accuracy  | 0.92  |

*Note*: Scores are based on the test split (~12,217 examples). Performance may vary with different domains or text types.

---

## βš™οΈ Training Setup
- **Hardware**: NVIDIA GPU
- **Training Time**: ~2 hours
- **Parameters**: ~11M (see the quick check below)
- **Optimizer**: AdamW (`transformers` defaults, weight decay 0.01)
- **Precision**: FP32 (no mixed precision)
- **Batch Size**: 16 per device (as in the training script below)
- **Learning Rate**: 2e-5 (as in the training script below)
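The parameter count is easy to sanity-check once the model is downloaded. A small sketch for verifying the ~11M figure locally (the number above is the card's claim, not an output reproduced here):

```python
from transformers import AutoModelForTokenClassification

# Load the checkpoint and count trainable weights
model = AutoModelForTokenClassification.from_pretrained("boltuix/NeuroBERT-NER")
total = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total / 1e6:.1f}M")
```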
---

## 🧠 Training the Model
Fine-tune the `boltuix/NeuroBERT-Mini` model on the `boltuix/conll2025-ner` dataset to replicate or extend the `NeuroBERT-NER` model. Below is a step-by-step guide with code.

```python
# πŸ› οΈ Step 1: Install required libraries quietly
!pip install transformers datasets tokenizers seqeval pandas pyarrow -q

# 🚫 Disable Weights & Biases (WandB) logging
import os
os.environ["WANDB_MODE"] = "disabled"

# πŸ“š Step 2: Import necessary libraries
import pandas as pd
import datasets
import numpy as np
from transformers import BertTokenizerFast
from transformers import DataCollatorForTokenClassification
from transformers import AutoModelForTokenClassification
from transformers import TrainingArguments, Trainer
import evaluate
from transformers import pipeline
from collections import defaultdict
import json

# πŸ“₯ Step 3: Load the CoNLL-2025 NER dataset from Parquet (adjust the path to your download location)
parquet_file = "/content/conll2025-ner.parquet"
df = pd.read_parquet(parquet_file)

# πŸ” Step 4: Convert pandas DataFrame to Hugging Face Dataset
conll2025 = datasets.Dataset.from_pandas(df)

# πŸ”Ž Step 5: Inspect the dataset structure
print("Dataset structure:", conll2025)
print("Dataset features:", conll2025.features)
print("First example:", conll2025[0])

# 🏷️ Step 6: Extract unique tags and create mappings
# Since ner_tags are strings, collect all unique tags
all_tags = set()
for example in conll2025:
    all_tags.update(example["ner_tags"])
unique_tags = sorted(list(all_tags))  # Sort for consistency
num_tags = len(unique_tags)
tag2id = {tag: i for i, tag in enumerate(unique_tags)}
id2tag = {i: tag for i, tag in enumerate(unique_tags)}
print("Number of unique tags:", num_tags)
print("Unique tags:", unique_tags)

# πŸ”§ Step 7: Convert string ner_tags to indices
def convert_tags_to_ids(example):
    example["ner_tags"] = [tag2id[tag] for tag in example["ner_tags"]]
    return example

conll2025 = conll2025.map(convert_tags_to_ids)

# πŸ“Š Step 8: Split dataset based on 'split' column
dataset_dict = {
    "train": conll2025.filter(lambda x: x["split"] == "train"),
    "validation": conll2025.filter(lambda x: x["split"] == "validation"),
    "test": conll2025.filter(lambda x: x["split"] == "test")
}
conll2025 = datasets.DatasetDict(dataset_dict)
print("Split dataset structure:", conll2025)

# πŸͺ™ Step 9: Initialize the tokenizer
tokenizer = BertTokenizerFast.from_pretrained("boltuix/NeuroBERT-Mini")

# πŸ“ Step 10: Tokenize an example text and inspect
example_text = conll2025["train"][0]
tokenized_input = tokenizer(example_text["tokens"], is_split_into_words=True)
tokens = tokenizer.convert_ids_to_tokens(tokenized_input["input_ids"])
word_ids = tokenized_input.word_ids()
print("Word IDs:", word_ids)
print("Tokenized input:", tokenized_input)
print("Length of ner_tags vs input IDs:", len(example_text["ner_tags"]), len(tokenized_input["input_ids"]))

# πŸ”„ Step 11: Define function to tokenize and align labels
def tokenize_and_align_labels(examples, label_all_tokens=True):
    """
    Tokenize inputs and align labels for NER tasks.

    Args:
        examples (dict): Dictionary with tokens and ner_tags.
        label_all_tokens (bool): Whether to label all subword tokens.

    Returns:
        dict: Tokenized inputs with aligned labels.
""" tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True) labels = [] for i, label in enumerate(examples["ner_tags"]): word_ids = tokenized_inputs.word_ids(batch_index=i) previous_word_idx = None label_ids = [] for word_idx in word_ids: if word_idx is None: label_ids.append(-100) # Special tokens get -100 elif word_idx != previous_word_idx: label_ids.append(label[word_idx]) # First token of word gets label else: label_ids.append(label[word_idx] if label_all_tokens else -100) # Subwords get label or -100 previous_word_idx = word_idx labels.append(label_ids) tokenized_inputs["labels"] = labels return tokenized_inputs # πŸ§ͺ Step 12: Test the tokenization and label alignment q = tokenize_and_align_labels(conll2025["train"][0:1]) print("Tokenized and aligned example:", q) # πŸ“‹ Step 13: Print tokens and their corresponding labels for token, label in zip(tokenizer.convert_ids_to_tokens(q["input_ids"][0]), q["labels"][0]): print(f"{token:_<40} {label}") # πŸ”§ Step 14: Apply tokenization to the entire dataset tokenized_datasets = conll2025.map(tokenize_and_align_labels, batched=True) # πŸ€– Step 15: Initialize the model with the correct number of labels model = AutoModelForTokenClassification.from_pretrained("boltuix/NeuroBERT-Mini", num_labels=num_tags) # βš™οΈ Step 16: Set up training arguments args = TrainingArguments( "boltuix/bert-ner", eval_strategy="epoch", # Changed evaluation_strategy to eval_strategy learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=1, weight_decay=0.01, report_to="none" ) # πŸ“Š Step 17: Initialize data collator for dynamic padding data_collator = DataCollatorForTokenClassification(tokenizer) # πŸ“ˆ Step 18: Load evaluation metric metric = evaluate.load("seqeval") # 🏷️ Step 19: Set label list and test metric computation label_list = unique_tags print("Label list:", label_list) example = conll2025["train"][0] labels = [label_list[i] for i in example["ner_tags"]] print("Metric test:", metric.compute(predictions=[labels], references=[labels])) # πŸ“‰ Step 20: Define function to compute evaluation metrics def compute_metrics(eval_preds): """ Compute precision, recall, F1, and accuracy for NER. Args: eval_preds (tuple): Predicted logits and true labels. Returns: dict: Evaluation metrics. 
""" pred_logits, labels = eval_preds pred_logits = np.argmax(pred_logits, axis=2) predictions = [ [label_list[p] for (p, l) in zip(prediction, label) if l != -100] for prediction, label in zip(pred_logits, labels) ] true_labels = [ [label_list[l] for (p, l) in zip(prediction, label) if l != -100] for prediction, label in zip(pred_logits, labels) ] results = metric.compute(predictions=predictions, references=true_labels) return { "precision": results["overall_precision"], "recall": results["overall_recall"], "f1": results["overall_f1"], "accuracy": results["overall_accuracy"], } # πŸš€ Step 21: Initialize and train the trainer trainer = Trainer( model, args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["validation"], data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics ) trainer.train() # πŸ’Ύ Step 22: Save the fine-tuned model model.save_pretrained("boltuix/bert-ner") tokenizer.save_pretrained("tokenizer") # πŸ”— Step 23: Update model configuration with label mappings id2label = {str(i): label for i, label in enumerate(label_list)} label2id = {label: str(i) for i, label in enumerate(label_list)} config = json.load(open("boltuix/bert-ner/config.json")) config["id2label"] = id2label config["label2id"] = label2id json.dump(config, open("boltuix/bert-ner/config.json", "w")) # πŸ”„ Step 24: Load the fine-tuned model model_fine_tuned = AutoModelForTokenClassification.from_pretrained("boltuix/bert-ner") # πŸ› οΈ Step 25: Create a pipeline for NER inference nlp = pipeline("token-classification", model=model_fine_tuned, tokenizer=tokenizer) # πŸ“ Step 26: Perform NER on an example sentence example = "On July 4th, 2023, President Joe Biden visited the United Nations headquarters in New York to deliver a speech about international law and donated $5 million to relief efforts." ner_results = nlp(example) print("NER results for first example:", ner_results) # πŸ“ Step 27: Perform NER on a property address and format output example = "This page contains information about the property located at 1275 Kinnear Rd, Columbus, OH, 43212." ner_results = nlp(example) # 🧹 Step 28: Process NER results into structured entities entities = defaultdict(list) current_entity = "" current_type = "" for item in ner_results: entity = item["entity"] word = item["word"] if word.startswith("##"): current_entity += word[2:] # Handle subword tokens elif entity.startswith("B-"): if current_entity and current_type: entities[current_type].append(current_entity.strip()) current_type = entity[2:].lower() current_entity = word elif entity.startswith("I-") and entity[2:].lower() == current_type: current_entity += " " + word # Continue same entity else: if current_entity and current_type: entities[current_type].append(current_entity.strip()) current_entity = "" current_type = "" # Append final entity if exists if current_entity and current_type: entities[current_type].append(current_entity.strip()) # πŸ“€ Step 29: Output the final JSON final_json = dict(entities) print("Structured NER output:") print(json.dumps(final_json, indent=2)) ``` ### πŸ› οΈ Tips - **Hyperparameters**: Adjust `learning_rate` (e.g., 1e-5 to 5e-5), `batch_size` (8-32), or `num_train_epochs` (2-5) based on performance. - **GPU Usage**: Enable `fp16=True` for faster training on NVIDIA GPUs. - **Dataset Splits**: Verify split sizes with `dataset.num_rows` to ensure accuracy. - **Custom Data**: Adapt the preprocessing script for custom NER datasets by updating `label_list`. 
### ⏱️ Expected Training Time
- ~2 hours on an NVIDIA GPU (e.g., V100 or A100) for ~115,812 training examples, 3 epochs, batch size 16. Note that the example script above sets `num_train_epochs=1`; increase it to match.
- CPU training is possible but may take significantly longer (e.g., 6-12 hours).

---

## 🌍 Carbon Impact
- **Training Location**: Local (Boltuix’s base)
- **Region**: Not specified
- **Emissions**: ~50g COβ‚‚eq for ~2 hours of single-GPU training
- **Measurement**: ML Impact tool

Use efficient hardware or cloud regions with renewable energy to minimize impact.

---

## πŸ› οΈ Installation
Install dependencies:

```bash
pip install transformers torch pandas pyarrow seqeval
```

- **Python**: 3.8+
- **Storage**: ~50 MB for model, ~6.38 MB for dataset
- **Optional**: NVIDIA CUDA for GPU acceleration

### Download Instructions πŸ“₯
- **Model**: Access from [boltuix/NeuroBERT-NER](https://huggingface.co/boltuix/NeuroBERT-NER).
- **Dataset**: Access from [boltuix/conll2025-ner](https://huggingface.co/datasets/boltuix/conll2025-ner).
- Load with Hugging Face `datasets` or pandas (see the snippet under Dataset Details below).

---

## πŸ§ͺ Evaluation Code
Evaluate the model on your own data:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from seqeval.metrics import classification_report
import torch

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("boltuix/NeuroBERT-NER")
model = AutoModelForTokenClassification.from_pretrained("boltuix/NeuroBERT-NER")

# Sample test data
texts = ["Barack Obama visited Microsoft in Seattle on January 2025."]
true_labels = [["B-PERSON", "I-PERSON", "O", "B-ORG", "O", "B-GPE", "O", "B-DATE", "I-DATE", "O"]]

pred_labels = []
for text in texts:
    inputs = tokenizer(text, return_tensors="pt", is_split_into_words=False, return_attention_mask=True)
    with torch.no_grad():
        outputs = model(**inputs)
    predictions = outputs.logits.argmax(dim=-1)[0].cpu().numpy()
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    word_ids = inputs.word_ids(batch_index=0)

    # Align prediction to word level (first token of each word)
    word_preds = []
    previous_word_idx = None
    for idx, word_idx in enumerate(word_ids):
        if word_idx is None or word_idx == previous_word_idx:
            continue  # Skip special tokens and subwords
        label = model.config.id2label[predictions[idx]]
        word_preds.append(label)
        previous_word_idx = word_idx
    pred_labels.append(word_preds)

# Evaluate
print("Predicted:", pred_labels)
print("True     :", true_labels)
print("\nπŸ“Š Evaluation Report:\n")
print(classification_report(true_labels, pred_labels))
```

---

## 🌱 Dataset Details
The model was fine-tuned on the `boltuix/conll2025-ner` dataset:
- **Entries**: 143,709
- **Size**: 6.38 MB (Parquet format)
- **Columns**: `split`, `tokens`, `ner_tags`
- **Splits**: Train (~115,812), Validation (~15,680), Test (~12,217)
- **NER Tags**: 36 B-/I- tags across 18 entity types, plus O (37 labels)
- **Source**: Curated from news, user-generated content, and research corpora
- **Annotations**: Expert-labeled for high accuracy
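The dataset can be pulled either through the `datasets` library or directly as a Parquet file. A minimal sketch β€” the `load_dataset` call assumes the Hugging Face dataset repo resolves as a standard Parquet dataset; otherwise fall back to the pandas route used in the training script:

```python
from datasets import load_dataset
import pandas as pd

# Option 1: via the datasets library (downloads and caches the repo)
ds = load_dataset("boltuix/conll2025-ner")
print(ds)

# Option 2: via pandas, from a locally downloaded Parquet file (hypothetical local filename)
df = pd.read_parquet("conll2025-ner.parquet")
print(df.columns.tolist(), len(df))
```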
---

## πŸ“Š Visualizing NER Tags
Visualize the tag distribution in `boltuix/conll2025-ner`. The script below computes the actual tag counts from the Parquet file and plots their distribution.

**Python Script for Actual Counts**:
```python
import pandas as pd
from collections import Counter
import matplotlib.pyplot as plt

# Load dataset (adjust the path to your local copy)
df = pd.read_parquet("conll2025-ner.parquet")

# Flatten ner_tags
all_tags = [tag for tags in df["ner_tags"] for tag in tags]
tag_counts = Counter(all_tags)

# Plot
plt.figure(figsize=(12, 7))
plt.bar(tag_counts.keys(), tag_counts.values(), color="#36A2EB")
plt.title("CoNLL 2025 NER: Tag Distribution", fontsize=16)
plt.xlabel("NER Tag", fontsize=12)
plt.ylabel("Count", fontsize=12)
plt.xticks(rotation=45, ha="right", fontsize=10)
plt.grid(axis="y", linestyle="--", alpha=0.7)
plt.tight_layout()
plt.savefig("ner_tag_distribution.png")
plt.show()
```

---

## βš–οΈ Comparison to Other Models

| Model                  | Dataset       | Parameters | F1 Score | Size    |
|------------------------|---------------|------------|----------|---------|
| **NeuroBERT-NER**      | conll2025-ner | ~11M       | 0.86     | ~50 MB  |
| BERT-base-NER          | CoNLL-2003    | ~110M      | ~0.89    | ~400 MB |
| DistilBERT-NER         | CoNLL-2003    | ~66M       | ~0.85    | ~200 MB |
| spaCy (en_core_web_lg) | OntoNotes     | -          | ~0.83    | ~800 MB |

**Advantages**:
- Lightweight (~11M parameters, ~50 MB)
- High F1 score (0.86) on `conll2025-ner`
- Optimized for real-time inference

---

## 🌐 Community and Support
Join the NER community:
- πŸ“ Explore the [model page](https://huggingface.co/boltuix/NeuroBERT-NER) 🌟
- πŸ› οΈ Report issues or contribute via the model repository’s Community tab πŸ”§
- πŸ’¬ Discuss on the [Hugging Face forums](https://discuss.huggingface.co) πŸ—£οΈ
- πŸ“š Learn more via the [Hugging Face Transformers docs](https://huggingface.co/docs/transformers) πŸ“–
- πŸ“§ Contact: Boltuix at [boltuix@gmail.com](mailto:boltuix@gmail.com)

---

## ✍️ Contact
- **Author**: Boltuix
- **Email**: [boltuix@gmail.com](mailto:boltuix@gmail.com)
- **Hugging Face**: [boltuix](https://huggingface.co/boltuix)

---

## πŸ“… Last Updated
**May 28, 2025** β€” Released v1.1 with fine-tuning on `boltuix/conll2025-ner`, updated performance metrics, and added training guide.

**[Get Started Now](#getting-started)** πŸš€