A cyber risk register that your board will actually read — and what to cut from the current one.
Most cyber risk registers are written to satisfy auditors, not to inform decisions. If the board is skimming yours, that's not disengagement — it's a signal the document is doing the wrong job.
Every board I sit on or advise has a cyber risk register. Almost none of them use it. The document arrives in the pre-read pack, a heat-map is glanced at, a question is asked about the two amber items that moved, and the conversation moves on. This is taken, wrongly, as a sign that the board is disengaged from cyber. It isn't. It's a sign the register is built for the wrong reader.
A register designed for a control auditor is a very different artefact from one designed for a board. The first is an exhaustive catalogue; the second is a decision instrument. When you hand the first to the second audience, you get exactly the ritualised skim we all pretend isn't happening.
The four things to cut
Before adding anything, prune. Most registers I review are two-thirds material that should not be in front of a board at all.
- Control-level entries. “Patch management SLA not consistently met” is a management issue, not a board issue. It belongs in the CISO's operational report, not the register.
- Generic threats. “Ransomware” and “insider threat” as standalone rows are noise. A register row should be a specific exposure tied to a specific asset or outcome.
- Scores without reasoning. A 4×3=12 with no paragraph explaining why the likelihood is 4 is worse than useless — it's a number dressed up as an analysis.
- Risks with no decision. If the register entry doesn't end with either a decision the board needs to make or an explicit “accepted, no action”, the row is performing rather than informing. (A rough filter for these four cuts is sketched below.)
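If you want those four cuts to be mechanical rather than a matter of taste, they translate into a simple pre-read filter. A minimal sketch, assuming a loosely structured row with hypothetical field names (`level`, `asset`, `score`, `rationale`, `board_decision`, `accepted`) rather than any standard schema:

```python
# Hypothetical pre-read filter: flags register rows that fail the four cuts.
# Field names and the generic-threat list are illustrative assumptions.

GENERIC_THREATS = {"ransomware", "insider threat", "phishing", "ddos"}

def reasons_to_cut(row: dict) -> list[str]:
    """Return the reasons a row should not reach the board pack."""
    reasons = []
    title = row.get("title", "").strip().lower()

    # Cut 1: control-level entries belong in the CISO's operational report.
    if row.get("level") == "control":
        reasons.append("control-level entry, not a board issue")

    # Cut 2: generic threats with no specific asset or outcome are noise.
    if title in GENERIC_THREATS and not row.get("asset"):
        reasons.append("generic threat, no specific exposure")

    # Cut 3: a score without reasoning is a number dressed up as analysis.
    if row.get("score") is not None and not row.get("rationale"):
        reasons.append("score present, but no likelihood or impact rationale")

    # Cut 4: every surviving row ends in a decision or an explicit acceptance.
    if not row.get("board_decision") and not row.get("accepted"):
        reasons.append("no decision requested and no explicit acceptance")

    return reasons
```

Anything the filter flags stays in the CISO's operational reporting; it doesn't reach the board pack.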
What belongs on the page
The register I hand to boards has a shape you could fit on a business card. Every row is a specific exposure, framed in terms of a business outcome, sized by its potential impact on that outcome, and paired with a named owner and a decision the board is being asked to make or affirm.
| Field | Typical register | Board-ready register |
|---|---|---|
| Risk title | “Third-party risk” | “Customer-data exposure via payments processor” |
| Impact | High | Est. £4–9m in regulatory penalties and customer churn; 6–9 month recovery |
| Likelihood | Medium | Plausible within 24 months; sector-peer incident in 2025 |
| Control posture | Partially mitigated | DPA in place · no tested failover · no contractual breach-notice SLA |
| Owner | CISO | CFO (commercial) · CISO (technical) · jointly accountable |
| Board decision | — | Approve £350k for dual-vendor architecture? Y/N this quarter. |
Notice what the right-hand column does. It turns a line item into a choice with a price tag. That is the only language in which a board can actually govern risk. Without it, you're asking directors to rank abstractions against other abstractions and expecting them to discover insight in the process.
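If your register lives in anything more structured than a spreadsheet, the right-hand column maps naturally onto a small record type. A minimal sketch in Python, with field names and the worked example of my own choosing rather than any standard:

```python
from dataclasses import dataclass

# Sketch of a board-ready register row; field names are illustrative.
@dataclass
class BoardRegisterRow:
    title: str             # specific exposure, not a category
    impact: str            # sized in money and recovery time, not "High"
    likelihood: str        # time-bounded, with the evidence behind it
    control_posture: str   # what exists, what is missing, what is untested
    owners: list[str]      # named accountabilities, jointly held where needed
    board_decision: str    # the choice, with a price tag, the board is asked to make

row = BoardRegisterRow(
    title="Customer-data exposure via payments processor",
    impact="Est. £4–9m in regulatory penalties and customer churn; 6–9 month recovery",
    likelihood="Plausible within 24 months; sector-peer incident in 2025",
    control_posture="DPA in place; no tested failover; no contractual breach-notice SLA",
    owners=["CFO (commercial)", "CISO (technical)"],
    board_decision="Approve £350k for dual-vendor architecture? Y/N this quarter",
)
```

The point of the type is not the code; it is that every field forces a sentence a director can act on.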
The structure that makes it read
Beyond row-level discipline, the register itself needs a shape that respects the reader. Mine is three sections, in this order, with word counts I enforce (a rough pre-send check is sketched after the list):
1. Top of book — five material exposures, one page.
These are the risks that, if they materialised this quarter, would dominate the next board meeting. Five, no more. Each gets three short paragraphs: what it is, what's changed, what we're asking the board to do. If you cannot compress it to that, the thinking isn't done.
2. Movements — what changed since last meeting.
Every register item that moved up or down, with a one-line reason and a named owner. This is the section that rewards reading: it tells directors where the organisation's attention has actually been.
3. Appendix — the full control-level detail.
Retained for auditors, regulators, and the genuinely curious director. Not paginated into the main document. Linked, referenced, and available on request. Its existence reassures; its absence from the reading flow is the point.
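For teams that generate the pack from a structured register, the limits above can be checked before anything goes out. A minimal sketch under assumed field names (`paragraphs`, `owner`, `one_line_reason`); the limits are simply the ones described in the three sections:

```python
# Illustrative pre-send check for the three-section pack structure.

MAX_TOP_OF_BOOK = 5          # five material exposures, no more
PARAGRAPHS_PER_ENTRY = 3     # what it is, what's changed, what we're asking

def pack_problems(top_of_book: list[dict], movements: list[dict]) -> list[str]:
    """Return the reasons this pack is not yet board-ready."""
    problems = []
    if len(top_of_book) > MAX_TOP_OF_BOOK:
        problems.append(
            f"top of book has {len(top_of_book)} entries; cut to {MAX_TOP_OF_BOOK}"
        )
    for entry in top_of_book:
        if len(entry.get("paragraphs", [])) != PARAGRAPHS_PER_ENTRY:
            problems.append(f"'{entry.get('title')}' is not three short paragraphs")
    for move in movements:
        if not move.get("owner") or not move.get("one_line_reason"):
            problems.append(
                f"movement '{move.get('title')}' is missing an owner or a one-line reason"
            )
    return problems
```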
The AI wrinkle
Every cyber register I see in 2026 is quietly broken by AI. Not because the risks are new — they mostly aren't — but because the exposure surface now includes systems the existing taxonomy doesn't fit: agentic workflows that write to production, third-party models handling customer data, internal tools with non-deterministic behaviour in regulated paths.
My recommendation is not to create an “AI risk” section. That's the wrong move — it makes AI exceptional when it should be integrated. Instead, audit each existing top-of-book entry with one question: does this system now have AI components that change the shape of the exposure? If yes, rewrite the row. If no, move on. That keeps the register honest without turning it into a novelty catalogue.
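In practice that audit is a single pass over the top-of-book rows. A minimal sketch, assuming each row carries a hypothetical `systems` list with a `has_ai_component` flag; what counts as an AI component is the judgement call, not the code:

```python
# Illustrative audit pass: flag existing top-of-book rows whose underlying
# systems now include AI components that change the shape of the exposure.

def rows_needing_rewrite(top_of_book: list[dict]) -> list[str]:
    flagged = []
    for row in top_of_book:
        # e.g. agentic workflows writing to production, third-party models
        # handling customer data, non-deterministic tools in regulated paths
        if any(s.get("has_ai_component") for s in row.get("systems", [])):
            flagged.append(row["title"])
    return flagged
```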
A good register is shorter than last year's. A great register is shorter still, and you know exactly why each row that survived is there.
One practical ask
If you're on a board and the cyber register arrives in next month's pre-read, try a small experiment: before reading it, write down the three things you believe are most likely to cause you a problem this year. Then open the document. The gap between your list and the register's top of book is the most useful single diagnostic you can run on the quality of your cyber reporting.
If the two match, your CISO is reading the same signals you are. If they don't, you have a productive conversation waiting — not about cyber, but about how risk gets surfaced to the people whose job is to decide about it.
I work with boards and CISOs on exactly this shape of problem. If you'd like a read of your current register, drop me a note.
