Documenting Security Risks
This article is educational content about risk documentation and is not professional compliance advice or legal counsel.
Once you've completed your risk assessment, you have findings. Now you need to document them in a way that's clear, defensible, and actionable. The documentation serves two critical purposes: it creates the audit trail showing that you thought about risk systematically, and it guides what controls you build and in what order. The challenge isn't just documenting that "we have a risk." It's documenting in a way that's specific enough to guide decisions, defensible enough to survive auditor scrutiny, and organized enough that someone can actually use the documentation to manage the program going forward.
Risk Statements: Clarity as the Foundation
A good risk statement should be clear enough that someone who wasn't in the assessment room can understand what you're worried about. "Data breach" isn't a risk statement. "Unencrypted customer data on backup tapes stored off-site in a third-party facility without access controls could be stolen by vendor storage facility employees" is. The first is vague and could mean anything. The second is specific about the asset (customer data), the weakness (lack of encryption and access controls on stored tapes), the threat (theft by vendor staff), and the consequence (data breach).
A solid risk statement names the asset you're concerned about, describes the vulnerability or weakness that would allow harm, identifies the threat or attack vector, and indicates what the potential consequence would be. It should be long enough to be specific but short enough to fit on a slide without being unwieldy. When you're writing it, assume the reader doesn't have your institutional knowledge and has never attended your risk assessment. The statement should stand alone and be understandable to someone reading it for the first time.
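The four elements above can be sketched as a small data structure. This is an illustrative model, not a standard schema; the field names, example values, and the `render` wording are assumptions chosen to show how the elements compose into one standalone sentence.

```python
from dataclasses import dataclass

# Hypothetical structure; field names are illustrative, not from any standard.
@dataclass
class RiskStatement:
    asset: str          # what you're protecting
    vulnerability: str  # the weakness that would allow harm
    threat: str         # who or what could exploit it
    consequence: str    # what happens if it does

    def render(self) -> str:
        """Compose the four elements into a single standalone sentence."""
        return (f"{self.vulnerability} affecting {self.asset} "
                f"could be exploited by {self.threat}, "
                f"resulting in {self.consequence}.")

# The backup-tape example from above, decomposed into its parts.
stmt = RiskStatement(
    asset="customer data on off-site backup tapes",
    vulnerability="lack of encryption and access controls",
    threat="vendor storage facility employees",
    consequence="theft and disclosure of customer records",
)
```

A statement built this way forces you to fill in all four parts; a vague entry like "data breach" simply doesn't fit the structure.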
This clarity matters because it becomes the reference point for everything that follows. Your treatment decision will reference this statement. Your control implementation plan will reference this statement. When an auditor asks why you implemented a specific control, you'll point back to this statement and the underlying assessment. If the statement is vague, all of the work that follows is less grounded.
Threat and Likelihood: What Could Happen
The threat is the external factor that could cause harm. A data breach threat might include employee theft of data, external hacking, ransomware infection, accidental exposure, or compromise of a vendor that has access to your systems. Each threat should be described clearly—what's the attack vector, who's the potential attacker, what's the scenario. Don't assume the reader understands the threat landscape in your industry.
Likelihood is your estimate of how probable the threat is. The scale varies, but the principle is consistent. "Very likely" means you'd expect this to happen regularly if you didn't have controls in place. "Likely" means it could happen in the normal course of business. "Possible" means it could theoretically happen but you're not expecting it. "Unlikely" means it's a tail risk that would be surprising if it occurred. The language matters less than consistency—use the same scale across all your risks so they're comparable.
Likelihood should reference historical data where you have it. If you've had similar incidents in the past, that informs your estimate. Industry benchmarks matter too—how often do attacks like this happen to organizations similar to yours? For threats you've never experienced, reference class thinking helps: what do similar organizations experience and how often? That gives you a baseline to estimate from.
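A consistent scale and a history-informed estimate can be sketched like this. The labels mirror the ones used above, but the numeric values and the incident-rate thresholds are illustrative placeholders, not benchmarks from any methodology.

```python
# Illustrative likelihood scale; numeric values are assumptions, not a standard.
LIKELIHOOD_SCALE = {
    "unlikely": 1,     # tail risk; surprising if it occurred
    "possible": 2,     # could theoretically happen
    "likely": 3,       # could happen in the normal course of business
    "very likely": 4,  # expected regularly absent controls
}

def likelihood_from_history(incidents_per_year: float) -> str:
    """Map an observed or benchmarked incident rate to a scale label.

    The thresholds below are illustrative placeholders; calibrate them
    against your own history and industry reference-class data.
    """
    if incidents_per_year >= 1.0:
        return "very likely"
    if incidents_per_year >= 0.5:
        return "likely"
    if incidents_per_year >= 0.1:
        return "possible"
    return "unlikely"
```

Using one shared scale function across every risk is what keeps the estimates comparable.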
Vulnerabilities and Exploitability: The Weaknesses That Matter
A vulnerability is a weakness in your defenses. It might be a missing control entirely—no encryption, no access logging. It might be a poorly implemented control—password policy exists on paper but isn't enforced in your systems. It might be a gap in training—staff don't know to verify suspicious emails. It might be a process failure—change management procedures exist but aren't followed consistently. The vulnerability should be specific enough that someone could actually address it. "Inadequate security" is too vague. "Customer data is stored in plaintext in the database and transmitted over unencrypted HTTP connections" is actionable.
Exploitability describes how easily the threat could take advantage of the vulnerability. A vulnerability that requires advanced technical skills to exploit is different from one that any employee could accidentally trigger. Your documentation should reflect that difference. Not all vulnerabilities are equally dangerous, and not all threats are equally likely to exploit any given vulnerability. A technically sophisticated attacker might exploit a subtle vulnerability that a typical employee wouldn't even notice, while a less skilled threat actor might easily exploit an obvious one.
Impact and Consequences: What Actually Happens
Impact is what happens if the threat exploits the vulnerability. It might be financial—direct loss of revenue, regulatory fines, litigation costs, cost of notification and credit monitoring. It might be operational—systems are down, business is disrupted, customer service is affected. It might be reputational—loss of customer trust, brand damage, market perception. It might be regulatory—enforcement action, audit failure, compliance violation. Different impacts matter differently depending on your organization's priorities and business model.
Documenting impact clearly requires thinking through the second and third-order consequences, not just the immediate effect. A data breach isn't just the breach itself—it's potential regulatory fines depending on applicable regulations, mandatory notification costs, credit monitoring costs if personal information was involved, litigation costs and settlements, lost customers due to trust damage, and significant internal staff time for forensics and response. A ransomware attack isn't just the ransom demand; it's system downtime, revenue loss during recovery and restoration, cost of forensics and incident response, potential regulatory notification if health or personal data was affected.
Your documentation should quantify impact where possible—potential fines based on applicable regulations, estimated revenue loss based on system criticality, notification costs. For non-quantifiable impacts that still matter, describe them clearly—reputational damage to your brand, compliance violations, loss of customer confidence. The impact drives your risk score and guides your treatment decisions later, so it needs to be well-reasoned, specific, and credible.
Risk Scoring: Putting It Together
Your risk score is typically likelihood multiplied by impact. A risk that's very likely and would cause severe harm gets a higher score than a risk that's unlikely and would cause minimal damage. The score is most useful as a way to compare risks and establish priorities, not as an absolute measure of exposure. It answers: relative to all our other risks, how important is this one?
The rating—"Critical," "High," "Medium," "Low," or however you classify risks—gives stakeholders and decision-makers a quick read on each risk's priority. The exact thresholds depend on your methodology, but the principle is consistent: the highest-rated risks get attention and resources first. Document how you arrived at the score. What made you assess likelihood as "high"? What factors drove your impact assessment? That documentation creates transparency and makes it possible to revisit your estimate later if circumstances change.
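The multiply-and-band approach can be sketched in a few lines. The 1-5 scales and the rating thresholds below are illustrative assumptions; your methodology defines the real bands.

```python
# Score = likelihood x impact on 1-5 scales; band thresholds are illustrative.
RATING_BANDS = [(20, "Critical"), (12, "High"), (6, "Medium"), (0, "Low")]

def risk_score(likelihood: int, impact: int) -> int:
    """Multiply likelihood by impact to get a comparable score."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def rating(score: int) -> str:
    """Translate a numeric score into a headline rating for reporting."""
    for floor, label in RATING_BANDS:
        if score >= floor:
            return label
    return "Low"

# A very likely (4), severe (5) risk outranks an unlikely (1), minor (2) one.
```

Note that the score only ranks risks against each other; a "12" carries no meaning outside the scale it was produced on.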
Treatment Decisions: What You'll Do About It
Every risk needs a treatment decision documented. That decision is one of four options: accept the risk (we'll live with this), mitigate it (we'll reduce it with controls), transfer it (we'll shift it with insurance or contracts), or avoid it entirely (we'll eliminate the threat). The decision should be justified. Why are you accepting this risk rather than mitigating it? Why is mitigation the right choice? What controls would reduce it adequately?
Risk treatment is fundamentally a business decision, not purely a technical one. A control that would technically eliminate a risk might be so expensive or operationally disruptive that acceptance makes more business sense. A vendor contract might not offer the protection you'd like, but it's the best transfer available at reasonable cost. That tradeoff should be documented and approved. If the risk is being accepted, it should be conscious and explicitly approved by leadership rather than simply overlooked.
Remediation Plans: Making Treatment Real
Your risk documentation should include a plan for how the treatment will be implemented. If you're mitigating with controls, what gets done, by when, and by whom? "We'll implement encryption" isn't a plan. "We'll implement encryption on customer data at rest using AES-256, beginning with [system], targeting completion by March 31, assigned to [person's name], with approved budget of [amount]" is a plan. The timeline needs to be realistic and approved by the people who'll execute it and the people who need to approve spending.
For longer-term remediation projects, breaking them into phases and milestones makes tracking easier and progress more visible. Instead of "implement new access control system, timeline TBD," you have "Phase 1: requirements gathering and vendor selection (target Q1), Phase 2: pilot implementation in non-critical systems (target Q2), Phase 3: full implementation and enforcement (target Q3), Phase 4: validation and reporting (target Q4)." That level of granularity makes it possible to track actual progress and catch slippages early.
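The phased plan above can be tracked with a simple structure. The phase names echo the access-control example; the owner, risk ID, and completion state are placeholders.

```python
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    target_quarter: str
    done: bool = False

@dataclass
class RemediationPlan:
    risk_id: str
    owner: str
    phases: list  # ordered milestones

    def progress(self) -> float:
        """Fraction of milestones complete -- makes slippage visible early."""
        if not self.phases:
            return 0.0
        return sum(p.done for p in self.phases) / len(self.phases)

# Hypothetical plan mirroring the four-phase example above.
plan = RemediationPlan(
    risk_id="R-7",
    owner="access-control project lead",  # placeholder owner
    phases=[
        Phase("requirements gathering and vendor selection", "Q1", done=True),
        Phase("pilot implementation in non-critical systems", "Q2"),
        Phase("full implementation and enforcement", "Q3"),
        Phase("validation and reporting", "Q4"),
    ],
)
```

A plan at this granularity can answer "are we on track?" with a number instead of a guess.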
The plan bridges the gap between risk assessment document and actual compliance program work. If the plan is vague, the risk sits in the register and nothing changes. If the plan is specific and owned, it becomes work that gets done.
Control Evidence: Proving It Works
For risks you're mitigating with controls, your documentation should note what control you're implementing and what evidence proves it's working. If the risk is unencrypted data in transit, the control is "encryption implemented for data in transit" and the evidence includes a policy statement, configuration screenshots showing encryption is active, and audit logs proving all traffic is encrypted. If the risk is unauthorized access to systems, the control is "access control implementation" and the evidence includes documented access requests approved by management, audit logs of access changes, and periodic access review reports.
Documentation at the control level comes later—in your full control documentation and evidence collection process. But at the risk level, you're noting what control addresses the risk and establishing the connection to evidence that backs it up. This is where risk assessment connects to your compliance program's control framework. Without this connection, risks and controls feel disconnected, and your program lacks coherence.
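The risk-to-control-to-evidence connection can be sketched as a plain mapping. The risk IDs, control wording, and evidence items are illustrative, drawn from the encryption example above; the useful part is that unmapped risks surface as explicit gaps.

```python
# Illustrative mapping connecting a risk to its control and supporting evidence.
RISK_CONTROL_MAP = {
    "R-3: unencrypted data in transit": {
        "control": "encryption implemented for data in transit",
        "evidence": [
            "encryption policy statement",
            "configuration screenshots showing encryption is active",
            "audit logs proving all traffic is encrypted",
        ],
    },
}

def uncovered_risks(risk_ids, mapping):
    """Risks with no mapped control are coherence gaps in the program."""
    return [r for r in risk_ids if r not in mapping]
```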
Residual Risk: Living with What Remains
Risk mitigation reduces risk but rarely eliminates it entirely. The risk that remains after you've implemented controls is residual risk. You might have encryption that could theoretically be broken with sufficient computing power and time. You might have access controls that could be bypassed by a highly determined attacker with insider knowledge. You might have incident response procedures that work correctly 95% of the time but could fail under extreme circumstances.
Residual risk should be explicitly acknowledged in your documentation. Document what risk remains, why you consider it acceptable, and what controls you have that address the remaining risk. It doesn't mean you've failed at control implementation. It means you've implemented cost-effective controls that reduce risk to acceptable levels. Auditors expect to see residual risk addressed honestly; they don't expect to see zero risk anywhere. Acknowledging residual risk credibly demonstrates that you understand what your controls actually achieve and what they don't.
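One simple way to make the residual explicit is to scale inherent risk by estimated control effectiveness. This is an illustrative model, not a prescribed formula; real residual-risk judgments weigh more than a single multiplier, and the 95% figure echoes the incident-response example above.

```python
def residual_score(inherent_score: float, control_effectiveness: float) -> float:
    """Residual risk = inherent risk scaled down by control effectiveness (0..1).

    A deliberately simple illustrative model: effectiveness never reaches
    1.0, so some documented residual always remains.
    """
    assert 0.0 <= control_effectiveness < 1.0
    return inherent_score * (1.0 - control_effectiveness)

# A control that works 95% of the time still leaves a remainder to document.
remaining = residual_score(20, 0.95)
```

However you compute it, the output belongs in the register next to the rationale for why that remainder is acceptable.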
You now understand the components of comprehensive risk documentation: clear risk statements that someone could understand without being in the assessment room, specific threat and vulnerability descriptions, articulated impact that includes second-order consequences, justified risk scoring that shows your work, documented treatment decisions with business justification, specific remediation plans with timelines and owners, identified controls with evidence, and honest acknowledgment of residual risk. When documentation is this thorough, your risk assessment becomes the foundation of your compliance program. It guides control selection, justifies investment, and provides the evidence that your compliance program is built on systematic analysis rather than intuition or fear.
The documentation also speaks to auditors in their language. When they ask why you have certain controls but not others, you can answer with reference to your risk assessment and documented treatment decisions. That foundation makes every conversation about compliance more credible and defensible.
Fully Compliance provides educational content about IT compliance and cybersecurity. This article reflects general information about risk documentation as of its publication date. Standards, methodologies, and best practices evolve—consult a qualified compliance professional for guidance specific to your organization.