Track Experiments
Ship Better Models
Full reproducibility with metrics, parameters, and artifacts. Compare runs, collaborate with your team, and version models automatically.
Log everything, miss nothing
Capture metrics, parameters, artifacts, and evaluations with a simple API. Full lineage tracking with minimal integration code.
Scalar
Single values (loss, accuracy)
Line
Time-series metrics
Image
Visualizations & samples
Table
Structured data
Histogram
Distributions
Confusion Matrix
Classification results
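To make the log types above concrete, here is an illustrative sketch of the kind of data shape each one carries. The dict layouts are assumptions for clarity, not Picsellia's actual payload schema:

```python
# Illustrative only: example data shapes for each log type.
# These layouts are assumptions, not the SDK's real schema.
log_payloads = {
    "scalar": 0.92,                          # single value (e.g. accuracy)
    "line": [0.9, 0.6, 0.4, 0.3],            # time-series metric per epoch
    "image": "samples/epoch_10.png",         # path to a visualization
    "table": {"class": ["cat", "dog"], "f1": [0.91, 0.88]},  # structured data
    "histogram": [0.1, 0.1, 0.2, 0.3, 0.3],  # distribution of values
    "confusion_matrix": [[50, 2], [3, 45]],  # rows = ground truth, cols = predicted
}

def payload_kind(name):
    """Return the Python type logged under a given log name."""
    return type(log_payloads[name]).__name__
```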
```python
from picsellia import Client

client = Client()
project = client.get_project("my-project")
experiment = project.get_experiment("my-experiment")

# Log hyperparameters
experiment.log_parameters({
    "learning_rate": 1e-4,
    "batch_size": 32,
})

# Log metrics during training
for epoch in range(epochs):
    experiment.log("train_loss", loss.item())
```
Artifact Storage
Store checkpoints, configs, and outputs
experiment.store("model.pt")
Dataset Attachment
Link training data for reproducibility
experiment.attach_dataset(dataset_version)
Model Export
Push to model registry with one call
experiment.export_as_model("my-model")
Compare trainings instantly
See the exact training distribution, hyperparameters, and augmentations behind every performance change. Find your best model faster.
| Name | mAP | Loss | LR | Epochs | Status |
|---|---|---|---|---|---|
| exp-001 | 0.87 | 0.09 | 1e-4 | 100 | completed |
| exp-002 | 0.82 | 0.12 | 1e-3 | 80 | completed |
| exp-003 | 0.91 | 0.07 | 5e-5 | 150 | running |
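A comparison like the table above can also be done programmatically. A minimal sketch, assuming run records shaped like the rows shown (these dicts are hypothetical, not an SDK response):

```python
# Hypothetical run records mirroring the comparison table above.
runs = [
    {"name": "exp-001", "mAP": 0.87, "loss": 0.09, "lr": 1e-4, "epochs": 100, "status": "completed"},
    {"name": "exp-002", "mAP": 0.82, "loss": 0.12, "lr": 1e-3, "epochs": 80,  "status": "completed"},
    {"name": "exp-003", "mAP": 0.91, "loss": 0.07, "lr": 5e-5, "epochs": 150, "status": "running"},
]

def best_completed(runs, metric="mAP"):
    """Pick the best finished run by a metric, ignoring in-progress ones."""
    finished = [r for r in runs if r["status"] == "completed"]
    return max(finished, key=lambda r: r[metric])

print(best_completed(runs)["name"])  # exp-001: exp-003 leads on mAP but is still running
```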
Build training pipelines with ease
Picsellia CV Engine is a modular toolkit for constructing computer vision workflows. Composable steps, framework extensions, and CLI automation.
Training Pipelines
Data → Model → Results. Streamlined training processes with built-in logic.
Processing Pipelines
Dataset transformation, pre-annotation, and data cleaning operations.
Framework Extensions
Pluggable architecture supporting multiple training libraries.
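The composable-step idea can be sketched in plain Python. This mirrors the concept only; the step names and context-dict convention are assumptions, not CV Engine's actual API:

```python
# Conceptual sketch of composable pipeline steps: each step is a plain
# function that takes and returns a context dict. Not CV Engine's real API.
def load_data(ctx):
    ctx["images"] = ["img_001.jpg", "img_002.jpg"]
    return ctx

def train(ctx):
    ctx["model"] = f"model trained on {len(ctx['images'])} images"
    return ctx

def evaluate(ctx):
    ctx["results"] = {"mAP": 0.85}
    return ctx

def run_pipeline(steps, ctx=None):
    """Data → Model → Results: thread a context through each step in order."""
    ctx = ctx or {}
    for step in steps:
        ctx = step(ctx)
    return ctx

out = run_pipeline([load_data, train, evaluate])
```

Keeping each step a pure function of the context is what makes steps reorderable and reusable across training and processing pipelines.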
Version your models automatically
Export experiments to the model registry with a single call. Track versions, compare performance, and deploy with confidence.
COCO metrics built-in
Add predictions, compare against ground truth, and compute standard evaluation metrics automatically.
Supports rectangles, polygons, classifications, and keypoints
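At the core of box-based COCO metrics is intersection over union (IoU). A minimal sketch for axis-aligned rectangles, for intuition only; the built-in evaluation computes this for you:

```python
def iou(box_a, box_b):
    """Intersection over union for [x1, y1, x2, y2] axis-aligned boxes."""
    # Overlap rectangle (clamped to zero width/height when boxes are disjoint)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detection typically counts as a true positive at IoU >= 0.5
print(iou([0, 0, 10, 10], [5, 5, 15, 15]))
```

Here the two boxes overlap in a 5×5 region, giving 25 / (100 + 100 − 25) ≈ 0.143, below the usual 0.5 match threshold.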
Ready to track your experiments?
Start logging metrics, comparing runs, and shipping better models with full reproducibility.