---
title: AI Polymer Classification
emoji: 🔬
colorFrom: indigo
colorTo: green
sdk: streamlit
app_file: app.py
pinned: false
license: apache-2.0
---
# 🔬 AI-Driven Polymer Aging Prediction and Classification System

A research project developed as part of AIRE 2025. This system applies deep learning to spectral data to classify polymer aging, a critical proxy for recyclability, using a fully reproducible and modular ML pipeline.

The broader research vision is a multi-modal evaluation platform that benchmarks not only Raman spectra but also image-based models and FTIR spectral data, ensuring reproducibility, extensibility, and scientific rigor.
## 🎯 Project Objective

- Build a validated machine learning system for classifying polymer spectra, predicting degradation levels as a proxy for recyclability
- Evaluate and compare multiple CNN architectures, beginning with Figure2CNN and ResNet variants, and expanding to additional trained models
- Ensure scientific reproducibility through structured diagnostics and artifact control
- Support sustainability and circular-materials research through spectrum-based classification
**Reference (for Figure2CNN baseline):**
Neo, E.R.K., Low, J.S.C., Goodship, V., & Debattista, K. (2023). Deep learning for chemometric analysis of plastic spectral data from infrared and Raman databases. *Resources, Conservation & Recycling*, 188, 106718. https://doi.org/10.1016/j.resconrec.2022.106718
## 🧠 Model Architectures

| Model | Description |
|---|---|
| Figure2CNN | Baseline model from literature |
| ResNet1D | Deeper candidate model with skip connections |
| ResNet18Vision | Image-focused CNN architecture, retrained on polymer dataset (roadmap) |
Future expansions will add additional trained CNNs, supporting direct benchmarking and comparative reporting.
## 📁 Project Structure (Cleaned and Current)

```
ml-polymer-recycling/
├── datasets/
├── models/           # Model architectures
├── scripts/          # Training, inference, utilities
├── outputs/          # Artifacts: models, logs, plots
├── docs/             # Documentation & reports
└── environment.yml   # (local) Conda execution environment
```
## ✅ Current Status

| Track | Status | Test Accuracy |
|---|---|---|
| Raman | ✅ Active & validated | 87.81% ± 7.59% |
| Image | 🚧 Planned expansion | N/A |
| FTIR | ⏸️ Deferred / modularized | N/A |
## 🔬 Key Features

- ✅ 10-Fold Stratified Cross-Validation
- ✅ CLI Training: `train_model.py`
- ✅ CLI Inference: `run_inference.py`
- ✅ Output artifact naming per model
- ✅ Raman-only preprocessing with baseline correction, smoothing, and normalization
- ✅ Structured diagnostics JSON (accuracies, confusion matrices)
- ✅ Canonical validation script (`validate_pipeline.sh`) confirms reproducibility of all core components
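The preprocessing steps listed above (baseline correction, smoothing, normalization) can be sketched roughly as follows. This is an illustrative outline, not the project's actual code: the function name, polynomial-baseline approach, and filter parameters are assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess_raman(intensity, poly_order=3, window=11, smooth_order=2):
    """Illustrative Raman preprocessing: polynomial baseline correction,
    Savitzky-Golay smoothing, then min-max normalization to [0, 1]."""
    intensity = np.asarray(intensity, dtype=float)
    x = np.arange(len(intensity), dtype=float)
    # Baseline correction: subtract a low-order polynomial fit
    baseline = np.polyval(np.polyfit(x, intensity, poly_order), x)
    corrected = intensity - baseline
    # Smoothing: Savitzky-Golay preserves peak shape better than a moving average
    smoothed = savgol_filter(corrected, window_length=window, polyorder=smooth_order)
    # Normalization: rescale so spectra from different samples are comparable
    return (smoothed - smoothed.min()) / (smoothed.max() - smoothed.min())
```

The order matters: normalizing before baseline removal would let the baseline offset dominate the scale.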
## Environments

```bash
# Local
git checkout main
conda env create -f environment.yml
conda activate polymer_env

# HPC
git checkout hpc-main
conda env create -f environment_hpc.yml
conda activate polymer_env
```
## 🚀 Sample Training & Inference

**Training (10-Fold CV):**

```bash
python scripts/train_model.py --model resnet --target-len 4000 --baseline --smooth --normalize
```

**Inference (Raman):**

```bash
python scripts/run_inference.py --target-len 4000 \
  --input datasets/rdwp/sample123.txt --model outputs/resnet_model.pth \
  --output outputs/inference/prediction.txt
```
**Inference Output Example:**

```
Predicted Label: 1
True Label: 1
Raw Logits: [[-569.544, 427.996]]
```
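Raw logits map to a class prediction by arg-max; a numerically stable softmax also yields a confidence score. The helper below is an illustration of that mapping, not the project's actual inference code:

```python
import numpy as np

def logits_to_prediction(logits):
    """Convert raw logits to (predicted class index, softmax confidence)."""
    z = np.asarray(logits, dtype=float).ravel()
    z = z - z.max()                      # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()  # softmax over the two classes
    return int(np.argmax(probs)), float(probs.max())

label, confidence = logits_to_prediction([[-569.544, 427.996]])
print(label)  # class index 1, matching "Predicted Label: 1" above
```

With logits this far apart the softmax saturates, so the confidence is effectively 1.0; closer logits would give a more informative score.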
**Validation Script (Raman Pipeline):**

```bash
./validate_pipeline.sh
# Runs preprocessing, training, inference, and plotting checks
# Confirms artifact integrity and logs test results
```
## 📊 Dataset Resources

| Type | Dataset | Source |
|---|---|---|
| Raman | RDWP | A Raman database of microplastics weathered under natural environments |

Datasets should be downloaded separately and placed here:

```
datasets/
└── rdwp/
    ├── sample1.txt
    ├── sample2.txt
    └── ...
```

These files are intentionally excluded from version control via `.gitignore`.
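Loading one of these samples for inference might look like the sketch below. It assumes each `.txt` file holds two whitespace-separated columns (Raman shift, intensity) — a common layout for such datasets, but an assumption here — and resamples to the `--target-len` of 4000 used by the CLI:

```python
import numpy as np

def load_raman_txt(path, target_len=4000):
    """Read a two-column (shift, intensity) text file and resample the
    intensity axis to a fixed length via linear interpolation."""
    data = np.loadtxt(path)
    shift, intensity = data[:, 0], data[:, 1]
    # Resample onto a uniform grid so every spectrum has target_len points
    grid = np.linspace(shift.min(), shift.max(), target_len)
    return np.interp(grid, shift, intensity)
```

Fixed-length resampling is what lets spectra of different raw lengths share one CNN input layer.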
## 📦 Dependencies

- Python 3.10+
- Conda, Git
- PyTorch (CPU & CUDA)
- NumPy, SciPy, Pandas
- scikit-learn
- Matplotlib, Seaborn
- argparse, json (standard library)
## 🧑‍🤝‍🧑 Contributors

- Dr. Sanmukh Kuppannagari – Research Mentor
- Dr. Metin Karailyan – Research Mentor
- Jaser H. – AIRE 2025 Intern, Developer
## 🎯 Strategic Expansion Objectives (Roadmap)

The roadmap defines three major expansion paths designed to broaden the system's capabilities and impact:
### Model Expansion: Multi-Model Dashboard

The dashboard will evolve into a hub for multiple model architectures rather than being tied to a single baseline. Planned work includes:
- Retraining & Fine-Tuning: Incorporating publicly available vision models and retraining them with the polymer dataset.
- Model Registry: Automatically detecting available .pth weights and exposing them in the dashboard for easy selection.
- Side-by-Side Reporting: Running comparative experiments and reporting each model's accuracy and diagnostics in a standardized format.
- Reproducible Integration: Maintaining modular scripts and pipelines so each model's results can be replicated without conflict.
This ensures flexibility for future research and transparency in performance comparisons.
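A minimal registry along the lines described above could simply scan for weight files; the directory name and the idea of keying models by file stem are assumptions for illustration, not the project's actual implementation:

```python
from pathlib import Path

def discover_models(weights_dir="outputs"):
    """Map model names (file stems) to their .pth weight paths, so a
    dashboard can list every trained model it finds on disk."""
    return {p.stem: p for p in sorted(Path(weights_dir).glob("*.pth"))}
```

A dashboard dropdown could then be populated from `discover_models().keys()`, so newly trained weights appear without code changes.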
### Image Input Modality

The system will support classification on images as an additional modality, extending beyond spectra. Key features will include:
- Upload Support: Users can upload single images or batches directly through the dashboard.
- Multi-Model Execution: Selected models from the registry can be applied to all uploaded images simultaneously.
- Batch Results: Output will be returned in a structured, accessible way, showing both individual predictions and aggregate statistics.
- Enhanced Feedback: Outputs will include predicted class, model confidence, and potentially annotated image previews.
This expands the system toward a multi-modal framework, supporting broader research workflows.
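Aggregating per-image predictions into the batch results described above might look like the following sketch; the tuple layout and result fields are illustrative assumptions:

```python
from collections import Counter

def summarize_batch(predictions):
    """Fold a list of (filename, predicted_class, confidence) tuples
    into per-class counts and a mean confidence for the whole batch."""
    counts = Counter(cls for _, cls, _ in predictions)
    mean_conf = sum(conf for _, _, conf in predictions) / len(predictions)
    return {"class_counts": dict(counts), "mean_confidence": mean_conf}

batch = [("a.png", 1, 0.9), ("b.png", 0, 0.8), ("c.png", 1, 0.7)]
summary = summarize_batch(batch)
```

Keeping the per-image tuples alongside the summary lets the dashboard show both individual predictions and aggregate statistics from one pass.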
### FTIR Dataset Integration

Although previously deferred, FTIR support will be added back in a modular, distinct fashion. Planned steps are:
- Dedicated Preprocessing: Tailored scripts to handle FTIR-specific signal characteristics (multi-layer handling, baseline correction, normalization).
- Architecture Compatibility: Ensuring existing and retrained models can process FTIR data without mixing it with Raman workflows.
- UI Integration: Introducing FTIR as a separate option in the modality selector, keeping Raman, Image, and FTIR workflows clearly delineated.
- Phased Development: Implementation details to be refined during meetings to ensure scientific rigor.
This guarantees FTIR becomes a supported modality without undermining the validated Raman foundation.
## 📌 Guiding Principles
- Preserve the Raman baseline as the reproducible ground truth.
- Additive modularity: models, images, and FTIR added as clean, distinct layers rather than overwriting core functionality.
- Transparency & reproducibility: all expansions documented, tested, and logged with clear outputs.
- Future-oriented design: workflows structured to support ongoing collaboration and successor-safe research.