Dyagnosys

PHQ-8 & GAD-7 — Augmented by Facial & Voice Signals

Client-side demo • Webcam + Mic required • Data stored locally in your browser

Live Camera

Face Mesh runs on-device. We compute neutral baselines during the first ~10 seconds.
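For reference, a minimal sketch of that calibration step, assuming normalized Face Mesh landmarks are available each frame. The landmark indices, the specific tension ratios, and the 10-second window below are illustrative assumptions, not the demo's exact features:

```ts
// Hypothetical per-frame landmark point from an on-device face mesh.
interface Point { x: number; y: number; z?: number }

// Illustrative tension features; the landmark indices are assumptions.
function tensionFeatures(lm: Point[]): number[] {
  const dist = (a: Point, b: Point) => Math.hypot(a.x - b.x, a.y - b.y);
  const faceHeight = dist(lm[10], lm[152]);     // forehead-to-chin, used for scale
  return [
    dist(lm[70], lm[159]) / faceHeight,         // brow-to-eye gap (brow tension)
    dist(lm[13], lm[14]) / faceHeight,          // inner-lip gap (jaw/mouth openness)
    dist(lm[61], lm[291]) / faceHeight,         // mouth width (lip press / smile)
  ];
}

// Average the features over the first ~10 s of frames to form the neutral baseline.
class BaselineCalibrator {
  private sums: number[] = [];
  private frames = 0;
  constructor(private readonly windowMs = 10_000, private readonly start = Date.now()) {}

  update(lm: Point[]): number[] | null {
    if (Date.now() - this.start > this.windowMs) return this.baseline();
    const f = tensionFeatures(lm);
    if (this.sums.length === 0) this.sums = f.map(() => 0);
    f.forEach((v, i) => (this.sums[i] += v));
    this.frames++;
    return null;                                // still calibrating
  }

  baseline(): number[] | null {
    return this.frames > 0 ? this.sums.map(s => s / this.frames) : null;
  }
}
```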

Live Audio & AI Signals

Speak normally for calibration.

Pitch is tracked with a simple autocorrelation estimate.
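A sketch of what such an estimate can look like, operating on a time-domain buffer (e.g. from an AnalyserNode); the frequency bounds and silence threshold are assumptions:

```ts
// Estimate fundamental frequency (Hz) of one audio frame by plain autocorrelation.
// `samples` is a time-domain buffer; `sampleRate` comes from the AudioContext.
// Returns null for quiet/unvoiced frames.
function estimatePitch(
  samples: Float32Array,
  sampleRate: number,
  minHz = 75,
  maxHz = 400,
): number | null {
  let rms = 0;
  for (const s of samples) rms += s * s;
  rms = Math.sqrt(rms / samples.length);
  if (rms < 0.01) return null;                  // assumed silence threshold

  const minLag = Math.floor(sampleRate / maxHz);
  const maxLag = Math.floor(sampleRate / minHz);
  let bestLag = -1;
  let bestCorr = 0;

  for (let lag = minLag; lag <= maxLag && lag < samples.length; lag++) {
    let corr = 0;
    for (let i = 0; i + lag < samples.length; i++) corr += samples[i] * samples[i + lag];
    if (corr > bestCorr) {
      bestCorr = corr;
      bestLag = lag;
    }
  }
  return bestLag > 0 ? sampleRate / bestLag : null;
}
```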

Facial Affect Proxy: 0 (0–100; higher = more tension vs. neutral)
Voice Stress Proxy: 0 (0–100; energy + pitch jitter)
AI Signals (Combined): 0 (average of the facial and voice proxies)
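Roughly, the three readouts can be derived as below; the scaling constants and the equal energy/jitter weighting are assumptions, since the captions only state that the facial proxy measures tension against the neutral baseline, the voice proxy mixes energy and pitch jitter, and the combined signal is their average:

```ts
const clamp01 = (x: number) => Math.min(1, Math.max(0, x));

// Facial proxy: relative deviation of current tension features from the neutral
// baseline, scaled to 0–100. `gain` is an assumed sensitivity constant.
function facialAffectProxy(features: number[], baseline: number[], gain = 5): number {
  const dev = features.reduce(
    (acc, f, i) => acc + Math.abs(f - baseline[i]) / (baseline[i] || 1e-6), 0,
  ) / features.length;
  return 100 * clamp01(dev * gain);
}

// Voice proxy: blend of RMS energy and pitch jitter (frame-to-frame F0 variation).
function voiceStressProxy(rms: number, pitchHz: number | null, prevPitchHz: number | null): number {
  const energy = clamp01(rms * 10);                         // assumed energy scaling
  const jitter = pitchHz && prevPitchHz
    ? clamp01((Math.abs(pitchHz - prevPitchHz) / prevPitchHz) * 20)
    : 0;                                                    // assumed jitter scaling
  return 100 * (0.5 * energy + 0.5 * jitter);
}

// Combined AI signal: plain average of the two proxies, as stated above.
const combinedAiSignal = (face: number, voice: number) => (face + voice) / 2;
```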

PHQ-8 (Past 2 Weeks)

Score: 0

Options: 0=Not at all, 1=Several days, 2=More than half the days, 3=Nearly every day

Interpretation: None–minimal

GAD-7 (Past 2 Weeks)

Score: 0

Options: 0=Not at all, 1=Several days, 2=More than half the days, 3=Nearly every day

Interpretation: Minimal
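Both questionnaires are scored as the sum of their 0–3 item responses, and the interpretation labels follow the standard severity bands for each scale (PHQ-8: 0–24; GAD-7: 0–21). A sketch of that mapping:

```ts
// Each item is answered 0–3; the scale score is the sum of its items.
const scaleScore = (answers: number[]): number => answers.reduce((a, b) => a + b, 0);

// Standard severity bands for PHQ-8 (0–24) and GAD-7 (0–21).
function interpret(scale: 'PHQ-8' | 'GAD-7', score: number): string {
  if (scale === 'PHQ-8') {
    if (score <= 4) return 'None–minimal';
    if (score <= 9) return 'Mild';
    if (score <= 14) return 'Moderate';
    if (score <= 19) return 'Moderately severe';
    return 'Severe';
  }
  if (score <= 4) return 'Minimal';
  if (score <= 9) return 'Mild';
  if (score <= 14) return 'Moderate';
  return 'Severe';
}

// Example: interpret('PHQ-8', 0) === 'None–minimal'; interpret('GAD-7', 0) === 'Minimal'.
```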

Augmented Results (Transparent Combination)

PHQ-8 (0–24): 0 (questionnaire only)
GAD-7 (0–21): 0 (questionnaire only)
Augmented Index: 0 (70% questionnaires + 30% AI signals, scaled)
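A sketch of the 70/30 combination; how the questionnaire scores are scaled onto a 0–100 range is an assumption, since the caption only says "scaled":

```ts
// Combine questionnaire scores with the AI signal on a common 0–100 scale.
// PHQ-8 maxes at 24 and GAD-7 at 21; the AI signal is already 0–100.
function augmentedIndex(phq8: number, gad7: number, aiSignal: number): number {
  const questionnaire = ((phq8 / 24 + gad7 / 21) / 2) * 100;  // assumed scaling to 0–100
  return 0.7 * questionnaire + 0.3 * aiSignal;
}
```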
Saved entries appear in “Over Time”.
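Since data stays in the browser, entries can be persisted with localStorage; the key name and record shape below are assumptions for illustration:

```ts
interface Entry { date: string; phq8: number; gad7: number; aiSignal: number; augmented: number }

const STORAGE_KEY = 'dyagnosys-entries';        // assumed key name

// Append a new entry to the locally stored history.
function saveEntry(entry: Entry): void {
  const existing: Entry[] = JSON.parse(localStorage.getItem(STORAGE_KEY) ?? '[]');
  existing.push(entry);
  localStorage.setItem(STORAGE_KEY, JSON.stringify(existing));
}

// Read the history back for the "Over Time" view.
function loadEntries(): Entry[] {
  return JSON.parse(localStorage.getItem(STORAGE_KEY) ?? '[]');
}
```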

Disclaimer: AI signals are heuristic proxies (facial landmark ratios, audio energy/pitch variability) and are not clinically validated measures of depression/anxiety. Use PHQ-8/GAD-7 and clinical judgment as primary references.