# BACKEND_MIGRATION_LOG.md
## Overview

This document tracks the migration of the inference logic from a monolithic Streamlit app to a modular, testable FastAPI backend for the Polymer AI Aging Prediction System.
## ✅ Completed Work
### 1. Initial Setup

- Installed `fastapi` and `uvicorn`, and set up a basic FastAPI app in `main.py`.
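
At this stage, `main.py` was probably no more than a skeleton along these lines (a minimal sketch; the app title and health-check route are illustrative, not recorded in this log):

```python
# main.py -- minimal FastAPI skeleton (illustrative; actual details may have differed)
from fastapi import FastAPI

app = FastAPI(title="Polymer AI Aging Prediction API")  # title is an assumption

@app.get("/")
def health_check():
    # Simple liveness check while wiring up uvicorn
    return {"status": "ok"}
```

Served locally with `uvicorn main:app --reload`.
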
### 2. Modular Inference Utilities

- Moved `load_model()` and `run_inference()` into `backend/inference_utils.py`.
- Separated model configuration for Figure2CNN and ResNet1D.
- Applied proper preprocessing (resampling, normalization) inside `run_inference()`.
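
A rough sketch of how `backend/inference_utils.py` could be laid out under these constraints. The module paths, weight filenames, resample length, and label map are assumptions for illustration, not the project's confirmed values:

```python
# backend/inference_utils.py -- illustrative sketch, not the exact project code
import numpy as np
import torch

from models.figure2_cnn import Figure2CNN  # assumed module locations
from models.resnet1d import ResNet1D

TARGET_LENGTH = 500  # assumed resample length

MODEL_CONFIG = {
    "figure2cnn": {"cls": Figure2CNN, "weights": "outputs/figure2cnn.pth"},
    "resnet1d": {"cls": ResNet1D, "weights": "outputs/resnet1d.pth"},
}

def load_model(model_name: str) -> torch.nn.Module:
    cfg = MODEL_CONFIG[model_name]  # raises KeyError for unknown names
    model = cfg["cls"]()
    model.load_state_dict(torch.load(cfg["weights"], map_location="cpu"))
    model.eval()
    return model

def preprocess(spectrum: list[float]) -> torch.Tensor:
    arr = np.asarray(spectrum, dtype=np.float32)
    # Resample to a fixed length, then normalize to zero mean / unit variance
    resampled = np.interp(
        np.linspace(0, len(arr) - 1, TARGET_LENGTH),
        np.arange(len(arr)),
        arr,
    )
    normalized = (resampled - resampled.mean()) / (resampled.std() + 1e-8)
    return torch.from_numpy(normalized.astype(np.float32)).reshape(1, 1, -1)

def run_inference(model_name: str, spectrum: list[float]) -> dict:
    model = load_model(model_name)
    x = preprocess(spectrum)
    with torch.no_grad():
        logits = model(x)
    pred_idx = int(torch.argmax(logits, dim=1).item())
    return {
        "predicted_class": pred_idx,
        "logits": logits.squeeze(0).tolist(),
        "label_map": {0: "unweathered", 1: "weathered"},  # assumed labels
    }
```
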
### 3. API Endpoint

- `/infer` route accepts JSON payloads with `model_name` and `spectrum`.
- Returns the full prediction dictionary with class index, logits, and label map.
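
Wired up with Pydantic, the route could look roughly like this (the request model and error handling are illustrative; the field names come from this log, and `run_inference` is the utility described above):

```python
# main.py -- illustrative /infer route; not the project's exact implementation
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

from backend.inference_utils import run_inference

app = FastAPI()

class InferenceRequest(BaseModel):
    model_name: str
    spectrum: list[float]

@app.post("/infer")
def infer(request: InferenceRequest):
    try:
        # Full result dict: class index, logits, label map
        return run_inference(request.model_name, request.spectrum)
    except KeyError:
        raise HTTPException(status_code=400, detail=f"Unknown model: {request.model_name}")
```
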
### 4. Validation + Testing

- Tested manually in the Python REPL.
- Tested via `curl`:

  ```bash
  curl -X POST -H "Content-Type: application/json" -d @backend/test_payload.json
  ```
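
For reference, roughly the same check from Python, assuming the server is running on uvicorn's default local address (`http://127.0.0.1:8000`, an assumption not stated in this log):

```python
# Rough Python equivalent of the curl test (assumes the default local uvicorn address)
import json
import requests

with open("backend/test_payload.json") as f:
    payload = json.load(f)

resp = requests.post("http://127.0.0.1:8000/infer", json=payload)
resp.raise_for_status()
print(resp.json())  # full prediction dict: class index, logits, label map
```
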
## Fixes & Breakpoints Resolved
- ✅ Fixed incorrect model path (`"models/"` → `"outputs/"`)
- ✅ Corrected unpacking bug in `main.py` → now returns the full result dict
- ✅ Replaced invalid `tolist()` call on string-typed logits
- ✅ Manually verified output from CLI and curl
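
The `tolist()` fix is the usual PyTorch-to-JSON pattern: call `tolist()` on the tensor itself before anything converts it to a string. A minimal illustration (variable names are hypothetical):

```python
# Illustrative only -- tolist() belongs on the tensor, not on a stringified copy
import torch

logits = torch.tensor([[1.2, -0.4]])

# Broken pattern: a string has no tolist(), so this raises AttributeError
# str(logits).tolist()

# Fixed pattern: serialize the tensor directly
serializable_logits = logits.squeeze(0).tolist()  # ~ [1.2, -0.4]
```
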
## 🧪 Next Focus: Robustness Testing
- Invalid `model_name` handling
- Short/empty spectrum validation
- ResNet model loading test
- JSON schema validation for input
- Unit tests via `pytest` or an integration test runner (see the sketch below)
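
A possible starting point for these checks with `pytest` and FastAPI's `TestClient`. The expected status codes express intended behavior rather than what the endpoint currently does, and the `figure2cnn` model name is an assumption:

```python
# backend/test_infer.py -- illustrative robustness tests (intended behavior, not current)
from fastapi.testclient import TestClient

from main import app

client = TestClient(app)

def test_invalid_model_name_rejected():
    resp = client.post("/infer", json={"model_name": "not_a_model", "spectrum": [0.1, 0.2]})
    assert resp.status_code == 400

def test_empty_spectrum_rejected():
    resp = client.post("/infer", json={"model_name": "figure2cnn", "spectrum": []})
    assert resp.status_code in (400, 422)

def test_malformed_payload_fails_schema_validation():
    # Missing model_name and wrong type for spectrum -> Pydantic validation error
    resp = client.post("/infer", json={"spectrum": "not a list"})
    assert resp.status_code == 422
```
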
## Future Enhancements
- Modular model registry (for adding more model classes easily); see the sketch after this list
- Add OpenAPI schema and example payloads for documentation
- Enable batch inference or upload support
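
One possible shape for that registry: a decorator that registers each model class by name, so new architectures plug in without touching the inference code (everything below is a hypothetical sketch, not existing project code):

```python
# Hypothetical model registry -- sketch only
from typing import Dict, Type

import torch.nn as nn

MODEL_REGISTRY: Dict[str, Type[nn.Module]] = {}

def register_model(name: str):
    """Class decorator that files a model class under `name`."""
    def decorator(cls: Type[nn.Module]) -> Type[nn.Module]:
        MODEL_REGISTRY[name] = cls
        return cls
    return decorator

@register_model("figure2cnn")
class Figure2CNN(nn.Module):
    ...  # architecture omitted

@register_model("resnet1d")
class ResNet1D(nn.Module):
    ...  # architecture omitted

def build_model(name: str) -> nn.Module:
    if name not in MODEL_REGISTRY:
        raise KeyError(f"Unknown model: {name!r}")
    return MODEL_REGISTRY[name]()
```

With a registry like this, `load_model()` and the `/infer` route only need a lookup by name, and adding a model class becomes a one-line decorator.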