---
license: apache-2.0
datasets:
- Shravanig/fire_detection_final
language:
- en
base_model:
- google/siglip2-base-patch16-512
pipeline_tag: image-classification
library_name: transformers
tags:
- Forest-Fire-Detection
- SigLIP2
- climate
- Smoke
- Normal
- Fire
---

![4.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/E4Cd-Kbj9wUkI9t_UOqE8.png)

# Forest-Fire-Detection

> `Forest-Fire-Detection` is a vision-language encoder model fine-tuned from `google/siglip2-base-patch16-512` for **multi-class image classification**. It is trained to detect whether an image contains **fire**, **smoke**, or a **normal** (non-fire) scene. The model uses the `SiglipForImageClassification` architecture.

> [!note]
> SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features: https://arxiv.org/pdf/2502.14786

```
Classification Report:
              precision    recall  f1-score   support

        Fire     0.9960    0.9896    0.9928      2020
      Normal     0.9902    0.9960    0.9931      2020
       Smoke     0.9995    1.0000    0.9998      2020

    accuracy                         0.9952      6060
   macro avg     0.9952    0.9952    0.9952      6060
weighted avg     0.9952    0.9952    0.9952      6060
```

![download (1).png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/xbB-O5F_pT10R9rLah_R3.png)

---

## Label Space: 3 Classes

```
Class 0: Fire
Class 1: Normal
Class 2: Smoke
```

---

## Install Dependencies

```bash
pip install -q transformers torch pillow gradio hf_xet
```

---

## Inference Code

```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/Forest-Fire-Detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Label mapping (class index to class name)
id2label = {
    "0": "Fire",
    "1": "Normal",
    "2": "Smoke"
}

def classify_image(image):
    # Convert the incoming NumPy array to an RGB PIL image
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    # Run the model and convert logits to class probabilities
    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    prediction = {
        id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
    }

    return prediction

# Gradio Interface
iface = gr.Interface(
    fn=classify_image,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(num_top_classes=3, label="Forest Fire Detection"),
    title="Forest-Fire-Detection",
    description="Upload an image to detect whether the scene contains fire, smoke, or is normal."
)

if __name__ == "__main__":
    iface.launch()
```

---

## Intended Use

`Forest-Fire-Detection` is designed for:

* **Wildfire Monitoring** – Rapid identification of forest fire and smoke zones.
* **Environmental Protection** – Surveillance of forest areas for early fire warning.
* **Disaster Management** – Support for emergency response and evacuation decisions.
* **Smart Surveillance** – Integration with drones or camera feeds for automated fire detection (a minimal frame-sampling sketch is included below).
* **Research and Analysis** – Analysis of visual datasets to identify fire-prone regions.
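
---

## Example: Classifying Frames from a Camera Feed

The frame-sampling sketch referenced above shows one way the classifier could be wired into a video source. It is a minimal illustration, not part of the released model: it assumes `opencv-python` is installed in addition to the dependencies listed earlier, and the capture source (`0` for the default webcam) and the one-frame-in-30 sampling interval are placeholder choices.

```python
import cv2
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/Forest-Fire-Detection"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
model.eval()

cap = cv2.VideoCapture(0)  # placeholder source: replace 0 with a drone/CCTV stream URL
frame_idx = 0
try:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame_idx += 1
        if frame_idx % 30 != 0:  # sample roughly one frame per second at ~30 fps
            continue
        # OpenCV delivers BGR frames; convert to RGB before preprocessing
        image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        inputs = processor(images=image, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        label = model.config.id2label[int(logits.argmax(-1))]
        print(f"frame {frame_idx}: {label}")
finally:
    cap.release()
```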
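
---

## Quick Single-Image Check

For a quick check on a single image without launching the Gradio demo, the high-level `pipeline` API can be used instead. This is a minimal sketch; `"forest.jpg"` is only a placeholder path for whatever local image you want to test.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an image-classification pipeline
classifier = pipeline("image-classification", model="prithivMLmods/Forest-Fire-Detection")

# "forest.jpg" is a placeholder -- point it at any local image file or URL
for result in classifier("forest.jpg"):
    print(f"{result['label']}: {result['score']:.3f}")
```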