---
language: en
license: cc-by-4.0
tags:
  - vision
  - image-classification
  - cnn
  - satellite-imagery
  - land-use-classification
  - france
  - remote-sensing
model-index:
  - name: satellite-land-use-classifier-france
    results:
      - task:
          type: image-classification
        dataset:
          name: hadrilec/satellite-pictures-classification-ign-france
          type: image-dataset
          split: validation
        metrics:
          - name: accuracy
            type: accuracy
            value: 0.9728
widget:
  - task: image-classification
    inputs:
      - name: example_input
        type: image
        url: >-
          https://huggingface.co/datasets/hadrilec/satellite-pictures-classification-ign-france/viewer/default/train?views%5B%5D=train&image-viewer=9473D066F7CE8EC28648894FD2FDAAF273ED2520
model:
  architecture: Custom CNN (2-layer Conv + 2-FC)
  framework: PyTorch
  input_shape:
    - 3
    - 256
    - 256
  output_labels:
    - forest
    - sea
    - urban
    - field
model_description: >
  This model uses a custom convolutional neural network, trained on satellite
  images provided by IGN, to classify areas in France into four categories:
  forest, sea, urban, and field.
citation:
  - authors:
      - Hadrien Leclerc
    title: Satellite Image Classifier for Land-Use in France
    year: 2025
datasets:
  - hadrilec/satellite-pictures-classification-ign-france
metrics:
  - accuracy
---

# Satellite Image Classifier (Custom CNN)

This repository contains a custom convolutional neural network trained on satellite imagery for land-use classification in France. The model is inspired by the foundational book *Deep Learning with PyTorch* by Eli Stevens, Luca Antiga, and Thomas Viehmann.

Dataset: [hadrilec/satellite-pictures-classification-ign-france](https://huggingface.co/datasets/hadrilec/satellite-pictures-classification-ign-france)
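If you want to explore the training data directly, the dataset can be pulled from the Hub with the πŸ€— `datasets` library. A minimal sketch follows; the `train` split name is an assumption, so check the dataset card for the actual available splits.

```python
from datasets import load_dataset

# Load the IGN satellite imagery dataset from the Hugging Face Hub.
# The "train" split name is an assumption; see the dataset card for
# the splits that actually exist.
ds = load_dataset("hadrilec/satellite-pictures-classification-ign-france", split="train")

example = ds[0]
print(example)  # typically a dict with an image and a label field
```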


πŸ— Model Architecture

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # Two convolutional layers: 3 -> 16 -> 8 channels, 3x3 kernels.
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 8, kernel_size=3, padding=1)
        # After two 2x2 max-pools, a 3 x 256 x 256 input becomes 8 x 64 x 64.
        self.fc1 = nn.Linear(8 * 64 * 64, 32)
        self.fc2 = nn.Linear(32, 4)  # four classes: forest, sea, urban, field

    def forward(self, x):
        out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)
        out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)
        out = out.view(-1, 8 * 64 * 64)  # flatten feature maps
        out = torch.tanh(self.fc1(out))
        out = self.fc2(out)  # raw logits; apply softmax for probabilities
        return out
```
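For orientation, here is a minimal inference sketch reusing the `Net` class above. The weights filename `model.pt`, the input file `tile.png`, the preprocessing, and the label order (taken from the metadata) are all assumptions and should be checked against the repository's actual files.

```python
import torch
from PIL import Image
from torchvision import transforms

# Label order follows the metadata above; verify it matches training.
labels = ["forest", "sea", "urban", "field"]

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),  # the model expects 3 x 256 x 256 inputs
    transforms.ToTensor(),
])

model = Net()
# "model.pt" is a hypothetical weights filename, not confirmed by this repo.
model.load_state_dict(torch.load("model.pt", map_location="cpu"))
model.eval()

image = Image.open("tile.png").convert("RGB")  # hypothetical input tile
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))  # add batch dimension
    pred = labels[logits.argmax(dim=1).item()]
print(pred)
```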

## Example Notebook

We provide an example notebook that demonstrates how to train the model:

Open the example notebook

This notebook is intended as a starting point for experimentation and helps you quickly see how to use the dataset in practice.
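If you only want the shape of the training procedure, here is a minimal cross-entropy training-loop sketch. The `train_dataset` object and the hyperparameters (optimizer, learning rate, batch size, epoch count) are illustrative assumptions, not the notebook's actual values.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

# `train_dataset` is assumed to yield (image_tensor, label) pairs already
# resized to 3 x 256 x 256; hyperparameters below are illustrative only.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Net().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

for epoch in range(10):
    running_loss = 0.0
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), targets)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f"epoch {epoch}: loss {running_loss / len(loader):.4f}")
```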

The model reaches an accuracy of 0.9728 on the validation split.
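An accuracy figure like this comes from a standard evaluation pass; the sketch below assumes a `val_loader` built the same way as the training loader above.

```python
# Evaluation sketch: classification accuracy over a validation DataLoader.
# `val_loader`, `model`, and `device` are assumed from the sketches above.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, targets in val_loader:
        images, targets = images.to(device), targets.to(device)
        preds = model(images).argmax(dim=1)
        correct += (preds == targets).sum().item()
        total += targets.size(0)
print(f"validation accuracy: {correct / total:.4f}")
```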

## IT Infrastructure

The model was trained on the Onyxia datalab platform with an NVIDIA Tesla T4 GPU.

## πŸ“ˆ Training & Validation Loss

Below is the training vs. validation loss curve:

*Figure: training vs. validation loss curve*


## πŸ–Ό Misclassified Samples

Here is a facet plot showing some misclassified images from the validation set:

*Figure: facet plot of misclassified validation images*
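A plot like this can be produced by collecting misclassified examples during the evaluation pass. The matplotlib sketch below is one illustrative way to do it; the grid size is arbitrary, and `val_loader`, `model`, `device`, and `labels` are assumed from the earlier sketches.

```python
import matplotlib.pyplot as plt
import torch

# Collect up to 8 misclassified validation images for a 2 x 4 facet grid.
mistakes = []
model.eval()
with torch.no_grad():
    for images, targets in val_loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        for img, p, t in zip(images, preds, targets):
            if p != t:
                mistakes.append((img, p.item(), t.item()))
        if len(mistakes) >= 8:
            break

fig, axes = plt.subplots(2, 4, figsize=(12, 6))
for ax, (img, p, t) in zip(axes.flat, mistakes[:8]):
    ax.imshow(img.permute(1, 2, 0))  # CHW -> HWC for matplotlib
    ax.set_title(f"pred: {labels[p]} / true: {labels[t]}")
    ax.axis("off")
plt.tight_layout()
plt.show()
```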


## πŸ–Ό Correctly Classified Samples

Here is a facet plot showing some well-classified images from the validation set:

*Figure: facet plot of correctly classified validation images*