## Severity classification
Every event carries exactly one of five severities. Classification is performed by an LLM against the rubric below, and the results are audited weekly by our editorial agent.
| Severity | Meaning | Examples |
|---|---|---|
| `info` | Process notes, agenda items, organisational news. | "AI Office Plenary scheduled for May 14", FAQ rewordings, navigation changes |
| `low` | Explanatory guidance, FAQs, blog-style updates. | "AI Office publishes blog post on conformity assessment process" |
| `medium` | New templates, standards drafts, consultation launches. | "GPAI Training Data Summary template v2.1", "CEN JTC 21 publishes draft prEN 18286" |
| `high` | Binding obligations clarified, deadlines set, formal recommendations. | "AI Board adopts Recommendation on Annex III high-risk classification" |
| `critical` | Enforcement actions, confirmed prohibitions, immediate-effect rules. | "First Article 5 prohibition enforcement decision", "GPAI provider notification breach published" |
### Why a fixed scale
If everything is "important" then nothing is. We force the model to pick exactly one of five buckets so subscribers can filter meaningfully. Most teams set `severity_min: "medium"` for Slack and `"high"` for paging.
### Subscription filter
```json
{
  "topics": ["gpai", "high-risk"],
  "severity_min": "medium"
}
```

This means: deliver only `medium`, `high`, and `critical` events on those topics.
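The matching logic a subscription implies can be sketched as a severity-rank comparison. This is an illustrative sketch, not the service's actual implementation; the `matches` function and event/subscription field names are assumptions that mirror the JSON example above.

```python
# Ordered scale: a subscription's severity_min admits that severity
# and everything above it.
SEVERITIES = ["info", "low", "medium", "high", "critical"]

def matches(event: dict, subscription: dict) -> bool:
    """Return True if the event clears the subscription's filters (hypothetical helper)."""
    # Topic filter: if topics are listed, the event's topic must be one of them.
    topics = subscription.get("topics")
    if topics and event["topic"] not in topics:
        return False
    # Severity filter: event rank must be at or above severity_min.
    min_rank = SEVERITIES.index(subscription.get("severity_min", "info"))
    return SEVERITIES.index(event["severity"]) >= min_rank

sub = {"topics": ["gpai", "high-risk"], "severity_min": "medium"}
matches({"topic": "gpai", "severity": "high"}, sub)   # True
matches({"topic": "gpai", "severity": "low"}, sub)    # False
```

The key design point is that severities are ordinal, not categorical: filtering is a single rank comparison rather than a list of allowed values.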
### Quality gate
Our editorial agent samples 20-50 events per week and compares the model's classification against the rubric. We target ≥90% agreement. When drift is detected we tune the enrichment prompt; the model is never re-trained, and the rubric remains the source of truth.
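The weekly check described above amounts to computing an agreement rate over the sampled pairs and comparing it to the 90% target. A minimal sketch, assuming paired lists of model and editor labels; the function name and sample data are hypothetical:

```python
def agreement_rate(model_labels: list[str], editor_labels: list[str]) -> float:
    """Fraction of sampled events where the model's label matches the editor's."""
    assert len(model_labels) == len(editor_labels), "samples must be paired"
    hits = sum(m == e for m, e in zip(model_labels, editor_labels))
    return hits / len(model_labels)

# Toy sample: the model and editor disagree on one event out of five.
sample_model  = ["medium", "high", "info", "medium", "critical"]
sample_editor = ["medium", "high", "low",  "medium", "critical"]

rate = agreement_rate(sample_model, sample_editor)  # 0.8
drift_detected = rate < 0.90  # below target -> tune the enrichment prompt
```

With realistic weekly samples of 20-50 events, a single disagreement moves the rate by 2-5 points, which is why drift is judged against the rubric rather than week-to-week noise alone.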