---
title: OpenWB
emoji: 🚀
colorFrom: red
colorTo: red
sdk: docker
app_port: 8501
tags:
  - streamlit
pinned: false
short_description: Free, open-source experiment tracking on HF Spaces
---
# OpenWB - Free W&B Alternative
A free, open-source experiment tracking platform hosted on HuggingFace Spaces. Track your ML experiments with beautiful dashboards, all powered by HuggingFace infrastructure.
## Features

- **HuggingFace Authentication** - Connect with your HF token
- **Interactive Dashboards** - Beautiful charts powered by Plotly
- **Easy API** - Simple Python client for logging metrics
- **Free Storage** - Uses HuggingFace Hub for data persistence
- **Real-time Updates** - Live dashboard updates
- **Multiple Chart Types** - Line plots, scatter plots, histograms
- **Experiment Comparison** - Compare multiple runs
- **Configuration Tracking** - Store and view experiment configs
## Quick Start

### 1. Deploy on HuggingFace Spaces

- Go to [HuggingFace Spaces](https://huggingface.co/spaces)
- Choose Docker as the SDK
- Select the Streamlit template
- Copy all the files from this repository
- Deploy your Space

### 2. Get Your API Key

- Visit your deployed Space
- Connect with your HuggingFace token
- Copy your generated API key from the dashboard

### 3. Install Client Library

The client is the single file `client.py` in this repository and only depends on `requests`; copy it next to your training code and install the dependency:

```bash
pip install requests
```
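If you prefer not to clone the whole repository, the snippet below is one way to pull `client.py` straight from your deployed Space with `huggingface_hub` (a sketch; the Space ID is a placeholder, and `huggingface_hub` is not otherwise required by this project):

```python
# Sketch: fetch client.py from your Space repository (placeholder repo_id).
from huggingface_hub import hf_hub_download

client_path = hf_hub_download(
    repo_id="your-username/your-space-name",  # replace with your actual Space ID
    filename="client.py",
    repo_type="space",
)
print(f"client.py downloaded to {client_path}")
```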
### 4. Start Tracking

```python
from client import MLTracker

# Initialize tracker
tracker = MLTracker(
    api_key="your-api-key-here",
    base_url="https://your-space-name.hf.space"
)

# Start experiment
tracker.init("my_first_experiment", config={
    "model": "ResNet50",
    "dataset": "CIFAR-10",
    "learning_rate": 0.001,
    "batch_size": 32
})

# Log metrics during training
for epoch in range(100):
    # Your training code here
    loss = train_one_epoch()
    accuracy = evaluate_model()

    # Log to ML Tracker
    tracker.log({
        "loss": loss,
        "accuracy": accuracy,
        "epoch": epoch
    })

# Finish experiment
tracker.finish()
```
## Project Structure

```text
ml-tracker/
├── Dockerfile          # HuggingFace Spaces Docker config
├── requirements.txt    # Python dependencies
├── app.py              # Main Streamlit dashboard
├── api.py              # FastAPI backend (optional)
├── client.py           # Python client library
└── README.md           # This file
```
## Configuration

### Environment Variables

You can set these environment variables for easier usage:

```bash
export ML_TRACKER_API_KEY="your-api-key"
export ML_TRACKER_BASE_URL="https://your-space-name.hf.space"
```
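With these set, you can build the tracker from the environment instead of hard-coding credentials; a minimal sketch using the `MLTracker` constructor documented in the API reference below:

```python
# Sketch: construct the tracker from the environment variables above.
import os
from client import MLTracker

tracker = MLTracker(
    api_key=os.environ["ML_TRACKER_API_KEY"],
    base_url=os.environ["ML_TRACKER_BASE_URL"],
)
```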
### HuggingFace Space Settings

In your Space settings, you can:

- Enable/disable public access
- Set custom domain
- Configure hardware (upgrade for better performance)
## Usage Examples

### Basic Usage

```python
import mltracker

# Initialize with environment variables
mltracker.init("experiment_name", config={
    "model": "BERT",
    "dataset": "IMDB"
})

# Log metrics
mltracker.log({"loss": 0.5, "accuracy": 0.85})
mltracker.log({"loss": 0.3, "accuracy": 0.90})

# Finish
mltracker.finish()
```
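`log` also accepts an explicit `step` argument (see the API reference below) if you want to control the step index yourself; a short sketch:

```python
# Sketch: logging with an explicit step index.
for step in range(3):
    mltracker.log({"loss": 1.0 / (step + 1)}, step=step)
```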
### Advanced Usage

```python
from client import MLTracker

tracker = MLTracker(api_key="...", base_url="...")

# Multiple experiments
for lr in [0.001, 0.01, 0.1]:
    tracker.init(f"lr_{lr}", config={"learning_rate": lr})

    for epoch in range(10):
        # Training code
        loss = train_with_lr(lr)
        tracker.log({"loss": loss})

    tracker.finish()

# Get experiment data
experiments = tracker.get_experiments()
for exp in experiments:
    print(f"Experiment: {exp['experiment']}")
    print(f"Steps: {exp['total_steps']}")
```
### PyTorch Integration

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18
from client import MLTracker

# Initialize tracker
tracker = MLTracker(api_key="...", base_url="...")
tracker.init("pytorch_experiment", config={
    "model": "ResNet18",
    "optimizer": "Adam",
    "learning_rate": 0.001
})

# Training loop
model = resnet18()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
criterion = nn.CrossEntropyLoss()

for epoch in range(100):
    for batch_idx, (data, target) in enumerate(train_loader):
        # Forward pass
        output = model(data)
        loss = criterion(output, target)

        # Backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Log metrics
        if batch_idx % 100 == 0:
            tracker.log({
                "loss": loss.item(),
                "epoch": epoch,
                "batch": batch_idx
            })

    # Validation
    val_accuracy = evaluate(model, val_loader)
    tracker.log({"val_accuracy": val_accuracy})
```
## Dashboard Features

### Metrics Visualization

- **Line Charts** - Track metrics over time
- **Multi-metric Plots** - Compare different metrics
- **Real-time Updates** - Live dashboard refresh

### Experiment Management

- **Experiment List** - View all your experiments
- **Configuration Viewer** - See experiment settings
- **Data Export** - Download raw data

### Comparison Tools

- **Multi-experiment View** - Compare different runs
- **Metric Filtering** - Focus on specific metrics
- **Time Range Selection** - Zoom into specific periods
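For intuition, the dashboard's line charts are roughly what you would get from Plotly Express over the logged history; this sketch uses made-up data and is not the actual `app.py` code:

```python
# Sketch: the kind of line chart the dashboard renders for one logged metric.
import pandas as pd
import plotly.express as px

# Hypothetical history, shaped like the metrics logged via tracker.log()
history = pd.DataFrame({
    "step": [0, 1, 2, 3],
    "loss": [0.9, 0.6, 0.4, 0.3],
})

fig = px.line(history, x="step", y="loss", markers=True, title="loss over steps")
fig.show()
```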
## Security

- **Token-based Auth** - Secure HuggingFace token authentication
- **API Key Management** - Unique API keys per user
- **Data Isolation** - Each user's data is separate
- **HTTPS Only** - All communication encrypted
## Development

### Local Development

```bash
# Clone repository
git clone https://github.com/yourusername/ml-tracker
cd ml-tracker

# Install dependencies
pip install -r requirements.txt

# Run locally
streamlit run app.py
```
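The project structure also lists an optional FastAPI backend (`api.py`). To run it locally alongside the dashboard, something like the following should work, assuming `api.py` exposes a FastAPI instance named `app` (check the file for the actual variable name):

```bash
# Run the optional FastAPI backend (assumes api.py defines `app`)
pip install uvicorn
uvicorn api:app --reload --port 8000
```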
### Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Submit a pull request
## API Reference

### MLTracker Class

```python
class MLTracker:
    def __init__(self, api_key: str, base_url: str)
    def init(self, experiment_name: str, config: dict = None)
    def log(self, metrics: dict, step: int = None)
    def get_experiments(self) -> list
    def get_experiment(self, name: str) -> dict
    def delete_experiment(self, name: str)
    def finish(self)
```
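For example, fetching a single run's data and deleting it when it is no longer needed (every call below appears in the class reference above; the returned dict follows the shape shown in the `get_experiments` example):

```python
from client import MLTracker

tracker = MLTracker(
    api_key="your-api-key-here",
    base_url="https://your-space-name.hf.space",
)

# Fetch one experiment's data by name
exp = tracker.get_experiment("my_first_experiment")
print(exp)

# Remove an experiment you no longer need
tracker.delete_experiment("my_first_experiment")
```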
### Global Functions

```python
def init(experiment_name: str, config: dict = None, api_key: str = None, base_url: str = None)
def log(metrics: dict, step: int = None)
def finish()
```
## Support

- **Issues** - Report bugs on GitHub
- **Discussions** - Ask questions in GitHub Discussions
- **Documentation** - Check the wiki for detailed guides

## License

MIT License - See LICENSE file for details

## Acknowledgments

- HuggingFace for providing free hosting
- Plotly for beautiful charts
- Streamlit for easy web apps
- The ML community for inspiration

Happy Experimenting!