AI Transparency
Uptimer uses locally executed decision-support capabilities for monitoring operations. We label capabilities by maturity (heuristic, statistical, or ML-backed) and do not rely on external LLM APIs.
Capability Labels
- Heuristic assistance: incident summaries, status summaries, and root-cause hints derived from deterministic rules and scored signals.
- Statistical detection: latency anomaly detection using local statistical thresholds and bounded windows.
- ML-backed prediction: predictive incident risk, likely failure type, uptime forecasts, and game-server activity projections using local ML.NET models.
- All modes are tenant-scoped and run locally; no third-party model endpoint is required.
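The statistical-detection tier described above can be sketched as a bounded rolling window with a mean-plus-sigma threshold. This is an illustrative sketch, not Uptimer's actual implementation; the window size, sigma multiplier, and minimum-sample floor are assumptions.

```python
from collections import deque
from statistics import mean, stdev

class LatencyAnomalyDetector:
    """Flags latency samples that exceed a statistical threshold computed
    over a bounded rolling window (illustrative sketch; parameters assumed)."""

    def __init__(self, window_size: int = 60, sigma: float = 3.0, min_samples: int = 10):
        self.window = deque(maxlen=window_size)  # bounded window caps memory and CPU
        self.sigma = sigma
        self.min_samples = min_samples

    def observe(self, latency_ms: float) -> bool:
        """Returns True if the sample is anomalous relative to recent history."""
        anomalous = False
        if len(self.window) >= self.min_samples:
            mu = mean(self.window)
            sd = stdev(self.window)
            # Floor the deviation so near-constant latency doesn't trigger on noise.
            threshold = mu + self.sigma * max(sd, 1.0)
            anomalous = latency_ms > threshold
        self.window.append(latency_ms)
        return anomalous
```

Because the threshold is a pure function of the window contents, the same input sequence always yields the same verdicts, matching the repeatability goal stated below.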
Modeling Approach
Uptimer implements a local intelligence pipeline directly in the application. The system combines deterministic logic, statistical analysis, and local ML.NET models to describe health, detect degradation, summarize incidents, and generate predictive signals.
This is engineered for repeatability and explainability: the same inputs produce the same outputs, and behavior can be audited from stored telemetry.
Predictive Pipeline (ML.NET)
- Incident Risk Score: predicts the probability of an Up-to-Down transition over the next 1-hour and 24-hour horizons.
- Uptime Forecast: predicts expected uptime for next day and next week, with a 7-day daily forecast curve.
- Early Warning State: labels monitor posture as Stable, Degrading, or Critical.
- Likely Incident Type: predicts the next likely class (Timeout, DNS, HTTP 5xx, Connection Refused).
- Game Server Activity Forecasts (Steam + Minecraft): predicts activity over the next 1 hour, 24 hours, 7 days, and 6 months.
- Per-horizon confidence is reported for game activity predictions; when data is sparse, the system automatically switches to a deterministic fallback.
- Feature Store: monitor prediction snapshots are persisted with model version, confidence, and notes.
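A feature-store snapshot of the shape described above might look like the following. This is a hypothetical sketch of the record, not Uptimer's schema; all field names other than the model version string are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class WarningState(Enum):
    STABLE = "Stable"
    DEGRADING = "Degrading"
    CRITICAL = "Critical"

@dataclass
class PredictionSnapshot:
    """Hypothetical per-monitor prediction record (field names assumed)."""
    monitor_id: str
    scored_at: datetime            # truncated to the hour; one snapshot per monitor/hour
    risk_1h: float                 # probability of Up-to-Down transition, next hour
    risk_24h: float                # probability of Up-to-Down transition, next 24 hours
    uptime_next_day: float
    uptime_next_week: float
    warning_state: WarningState
    model_version: str = "mlnet-predictive-v1"
    confidence: float = 0.0
    is_fallback: bool = False      # True when deterministic heuristics produced the values
    notes: str = ""

def hour_bucket(ts: datetime) -> datetime:
    """Key used to upsert the 'latest snapshot per monitor/hour'."""
    return ts.replace(minute=0, second=0, microsecond=0)
```

Keying on the truncated hour is what makes "latest snapshot per monitor/hour" an upsert rather than an append.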
Game Server Activity Predictions (ML.NET)
- Monitor coverage: Steam and Minecraft (Java + Bedrock), shown in monitor-type custom detail panels.
- Input signals: retained player-count telemetry from check metadata, normalized into bounded hourly windows.
- Modeling approach: local ML.NET SDCA regression with calendar-aware features plus explicit fallback.
- Prediction timelines: next hour, next 24 hours, next 7 days, and next 6 months.
- Visibility windows: public pages use the last 24 hours only; private pages use full retained history within billing retention limits.
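"Calendar-aware features" for a linear regressor such as SDCA typically means cyclic time encodings plus lagged values. The sketch below is an assumption about what such a feature vector could contain; the actual feature set is not documented here.

```python
import math
from datetime import datetime
from typing import List

def calendar_features(ts: datetime, recent_counts: List[float]) -> List[float]:
    """Builds a hypothetical calendar-aware feature vector for a player-count
    regressor. Cyclic sin/cos encodings let a linear model capture daily and
    weekly periodicity; lagged counts supply short-term trend."""
    hour = ts.hour
    dow = ts.weekday()
    lags = (recent_counts + [0.0] * 3)[:3]  # pad when history is short
    return [
        math.sin(2 * math.pi * hour / 24), math.cos(2 * math.pi * hour / 24),
        math.sin(2 * math.pi * dow / 7),   math.cos(2 * math.pi * dow / 7),
        float(dow >= 5),                   # weekend flag
        *lags,                             # last three hourly averages
    ]
```

Sin/cos pairs avoid the discontinuity a raw hour-of-day feature has at midnight, which matters for linear models like SDCA regression.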
Incident Root-Cause Hints
- Root-cause hints are tenant-scoped and generated only when both AI incident summaries and AI root-cause hints are enabled for that tenant.
- Signals include checker outputs (timeouts, HTTP codes, error patterns) and status-event recurrence patterns.
- Uptimer stores the top one or two inferred categories with a confidence score and surfaces them in incident summary views.
- Hints are advisory and explainable; they do not auto-remediate or modify monitor settings.
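Rule-scored categorization of checker signals could look like the sketch below: each category is credited for the failing checks it explains, and the top matches are returned with a confidence fraction. The rules and signal field names are illustrative assumptions, not Uptimer's actual heuristics.

```python
from collections import Counter
from typing import Dict, List, Tuple

# Hypothetical category rules; signal field names are illustrative assumptions.
CATEGORY_RULES = {
    "Timeout": lambda s: s.get("error", "").startswith("timeout"),
    "DNS": lambda s: "dns" in s.get("error", ""),
    "HTTP 5xx": lambda s: 500 <= s.get("http_status", 0) <= 599,
    "Connection Refused": lambda s: "refused" in s.get("error", ""),
}

def root_cause_hints(signals: List[Dict], top_n: int = 2) -> List[Tuple[str, float]]:
    """Scores each category by the fraction of failing checks it explains
    and returns the top one or two with a confidence value."""
    counts = Counter()
    for s in signals:
        for category, rule in CATEGORY_RULES.items():
            if rule(s):
                counts[category] += 1
    total = len(signals) or 1  # avoid division by zero on an empty batch
    return [(cat, n / total) for cat, n in counts.most_common(top_n)]
```

Because every hint traces back to a named rule and a match count, the output stays explainable and advisory, as the boundaries above require.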
Retraining and Runtime Behavior
- Models retrain on first boot before regular scoring starts.
- After first boot, retraining runs nightly.
- Scoring runs hourly and stores the latest snapshot per monitor/hour.
- When historical data is sparse, Uptimer falls back to deterministic heuristics instead of forcing low-confidence ML output.
- Game-server activity forecasts are computed locally from retained monitor history and explicitly flag heuristic fallback mode when data quality is insufficient.
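The sparse-data routing described above reduces to a gate on history depth: enough samples and the ML path runs, otherwise a deterministic heuristic produces the value and the snapshot is flagged. A minimal sketch, assuming a hypothetical sample threshold:

```python
from typing import Callable, Dict, List

MIN_ML_SAMPLES = 200  # illustrative threshold; the real minimum is not documented here

def score_monitor(samples: List[float],
                  ml_predict: Callable[[List[float]], float],
                  heuristic_predict: Callable[[List[float]], float]) -> Dict:
    """Routes scoring through the ML model only when enough history exists;
    otherwise falls back to deterministic heuristics and flags the snapshot."""
    if len(samples) >= MIN_ML_SAMPLES:
        return {"risk": ml_predict(samples), "is_fallback": False}
    # Sparse history: avoid emitting low-confidence ML output.
    return {"risk": heuristic_predict(samples), "is_fallback": True}
```

Persisting the `is_fallback` flag alongside each snapshot is what makes aggregate counters such as the fallback ratio in the status panel below computable.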
Nightly Retraining Status
Last Model Trained (UTC): 2026-03-16 08:37:24Z
Next Nightly Retrain ETA (UTC): 2026-03-17 08:37:24Z
Last Retrain Attempt (UTC): 2026-03-16 08:37:24Z
Last Scoring Snapshot (UTC): 2026-03-16 14:00:00Z
Was Last Run Initial Boot: Yes
Model Version: mlnet-predictive-v1
Candidate Tenants (Last Run): 2
Tenants Trained (Last Run): 1
Tenants Failed (Last Run): 0
Total Training Samples (Last Run): 11,967
Total Training Monitors (Last Run): 7
Snapshots (Last 24h): 136
Distinct Monitors Scored (24h): 6
ML-backed Snapshots (24h): 0
Fallback Snapshots (24h): 136
Fallback Ratio (24h): 100.0%
Avg Training Samples (ML-backed): 0
This panel is aggregate-only and excludes monitor names, endpoints, payloads, and any tenant-identifying content.
Auto-refresh interval: 30 seconds.
No External LLM Dependency
- No paid model providers are required.
- No monitor payloads are sent to third-party AI endpoints.
- No secret headers, credentials, or response bodies are required for summaries.
- AI features continue to operate in local/offline environments.
Safety Boundaries
- Minimum data thresholds are required before ML predictions are treated as full-confidence model output.
- Telemetry is sanitized before summary generation.
- Inputs are capped to bounded windows to avoid runaway processing.
- Public status pages expose only the latest 24-hour game-activity window, including forecast context.
- AI output is informational only and does not execute tools or mutate monitor config automatically.
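The 24-hour public visibility boundary can be sketched as a clip applied before rendering: public pages see only the trailing window, private views see full retained history. Function and parameter names here are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone
from typing import List, Optional, Tuple

PUBLIC_WINDOW = timedelta(hours=24)

def visible_activity(points: List[Tuple[datetime, float]],
                     is_public: bool,
                     now: Optional[datetime] = None) -> List[Tuple[datetime, float]]:
    """Public status pages only see the trailing 24-hour activity window;
    private views get the full retained history (illustrative sketch)."""
    if not is_public:
        return points
    now = now or datetime.now(timezone.utc)
    cutoff = now - PUBLIC_WINDOW
    return [(ts, v) for ts, v in points if ts >= cutoff]
```

Clipping at the data layer, rather than in the page template, keeps older telemetry from ever reaching a public response.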