How to write security reports that get results

Lessons from 1,000 Incidents: Why the security reports that drive real change speak the language of business impact, not technical detail.

After a decade in incident response—roughly 1,000 incidents across organisations ranging from five-person joinery shops to multinational retail enterprises—I've developed a clear sense of what separates "good security" from "good control coverage."

The firms that survive incidents, and prevent them, are almost never the ones with the most tools or the biggest budgets. They're the ones who understand their resilience: where they'd actually break under pressure, and what that would cost them.

Here are the lessons that changed how I approach security assessments.

Compliance framing creates false confidence

Cyber Essentials, SOC 2, ISO 27001—these frameworks exist to make it easier to do business with other companies. Executives sponsor these programmes because they generate revenue: faster onboarding, shorter deal cycles when responding to RFPs, increased consumer confidence.

None of this actually makes an organisation more secure. At best, certifications correlate weakly with resilience, and the relationship is not causal; it's just a pattern.

When you write a report that leans heavily on compliance gaps, you're speaking a language that creates false confidence. Clients think "we passed the audit, we're secure" or "we failed this control, that's our problem." Neither is true.

Clients respond to money, not maturity scores

Nobody outside of security knows what "Level 3 maturity" means. But tell a client "you have a high insolvency risk from a major incident" and suddenly you've got board-level attention.

The principle here is simple: security programmes cost money, and in any commercial venture, spending has to deliver a return. If your recommendations cost more than the value they deliver, why would anyone implement them?

I've known enterprises that simply accept they'll have a major incident every one to two years, because the cost of transforming their security architecture would exceed the impact of those incidents. This is a valid position. If you can help your client weigh the pros and cons with precision, you become one of their most trusted partners.

The trick is having the data and vocabulary to model the commercial implications. Frame your findings in terms of financial exposure, operational downtime, and recovery costs—not abstract severity ratings.
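The cost-benefit reasoning above can be sketched as a simple annualised loss expectancy (ALE) comparison: expected yearly loss under the current state versus residual loss plus programme cost after remediation. All figures below are hypothetical placeholders, not data from real incidents:

```python
def annualised_loss_expectancy(incident_cost: float, incidents_per_year: float) -> float:
    """Expected yearly loss: single-incident cost multiplied by annual frequency."""
    return incident_cost * incidents_per_year

# Current state (hypothetical): a major incident roughly every 18 months,
# costing ~£400k in downtime, recovery, and lost revenue.
current_ale = annualised_loss_expectancy(incident_cost=400_000, incidents_per_year=1 / 1.5)

# Proposed programme (hypothetical): £250k/year to run, expected to cut
# incident frequency to roughly one every five years.
programme_cost = 250_000
residual_ale = annualised_loss_expectancy(incident_cost=400_000, incidents_per_year=1 / 5)

net_benefit = current_ale - (residual_ale + programme_cost)
print(f"Current exposure:  £{current_ale:,.0f}/year")
print(f"With programme:    £{residual_ale + programme_cost:,.0f}/year")
print(f"Net benefit:       £{net_benefit:,.0f}/year")
```

With these particular numbers the net benefit comes out negative, which is exactly the "accept the incidents" position described above: a rational call the client can only make once the exposure is modelled in money rather than maturity scores.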

Lead with "time to low risk"

Executive audiences don't understand CVSS scores, and they're not going to read your 47 technical findings. Include those for context and for technical readers, but put them in an appendix.

Instead, lead with the programme required to move from current state to acceptable state. How many months will it take? How much will it cost? Who will do the work? How do they measure success?

This transforms a scary report into an actionable project plan. Your client should feel like they've been handed a solution, not a problem.

Periphery systems are where organisations actually die

Core infrastructure is usually fine. Most organisations now have M365, EDR, and MFA on their main systems. If they've put even minimal effort into hardening defaults, or have an MSP handling this, they're generally in a strong position.

The reason these organisations still get compromised is the exceptions: machines without endpoint detection, servers missing from asset registers, an SSL VPN nobody knew existed.

These are often quick wins. Migration might be painful, but it's ultimately a short programme of work that delivers a large reduction in risk. Your report should surface these peripheral blind spots clearly; they're where the real value lies.

The report that gets results

The security reports that drive change are the ones that speak the language of business impact. They quantify exposure in terms executives understand, present a clear path from current state to acceptable risk, and offer solutions rather than problems.

Technical detail matters, but it belongs in the appendix. Lead with what the organisation stands to lose, what it will take to fix, and how long they'll need to get there.

That's the report that gets funded.

The path forward

Building a security programme that sticks

The shift from "checking boxes" to "building resilience" requires a fundamental change in how we communicate risk. Security professionals understand technical controls. Executives understand business outcomes. The most effective assessments translate between these languages, giving leadership the data and vocabulary to make conscious risk decisions.

Clarity about what you're actually protecting, what it costs to protect it, and what happens when defences fail is what separates organisations that survive incidents from those that don't. It's not about having the most tools; it's about having that clarity.

Want to see this approach in action? View a sample cyber assessment to see how these principles translate into a real deliverable.