HIPAA Risk Assessment Requirements
This article is for educational purposes only and does not constitute professional compliance advice or legal counsel. Requirements and standards evolve, and you should consult with a qualified compliance professional about your specific situation.
Your compliance team tells you that you need to conduct a risk assessment. You've heard that everyone should do one, you know most organizations skip it or half-do it, and you're worried that whatever you produce will come back to haunt you during an audit. Here's what makes it worse: the regulation is vague about exactly what the assessment should look like, so you're not sure whether a spreadsheet counts, whether hiring a consultant is required, or whether you can do it in-house.

The truth is simpler than you think. A risk assessment is the documented process where you identify the systems handling patient data in your environment, identify the threats to those systems, evaluate what could realistically go wrong and how bad it would be, assess what controls you have in place right now, and determine whether the remaining risk is acceptable to your organization.

Without this foundation, you're selecting compliance controls based on guesswork. With it, you can justify every control decision to an auditor and allocate your compliance budget rationally. The risk assessment isn't bureaucratic overhead. It's the essential bedrock that makes every other compliance decision defensible.
What a HIPAA Risk Assessment Actually Is
The regulation requires a documented risk assessment. That means it can't be an informal understanding that lives only in someone's head or an untracked spreadsheet on one person's laptop. It can't be a consultant's report that sits on a shelf gathering dust after they leave. It must be an organizational artifact: dated, signed by appropriate leadership, and referenced when you're making compliance decisions going forward. The fact that it's documented and discoverable means it has to hold up to scrutiny from an HHS auditor reviewing your practices.
The assessment must identify all systems, applications, and processes that handle, process, store, or transmit protected health information. Most organizations think this means the electronic health record system and maybe the backup systems. In reality, you need to cast a much wider net. If patient data flows through email, email is in scope. If clinicians access records on laptops from home, laptops are in scope. If patient lists exist in cloud storage, cloud storage is in scope. If a legacy system stores archived patient encounters that nobody uses anymore, it's still in scope because it contains PHI. Many organizations discover during the assessment that they're handling patient data in systems they didn't formally account for—a practice management system that was deployed years ago, a reporting database that gets a nightly sync of clinical data, a shared drive where office staff stores patient contact information. The assessment forces you to map the actual flow of data through your environment, which often reveals surprises.
For each system you identify, the assessment must document what data it handles. Is it identified patient information with names, medical record numbers, and diagnoses? Is it de-identified data where patient identifiers have been removed? Is it aggregated summary data showing trends but not individual records? The type and sensitivity of data matters because threats have much higher impact against systems handling identified sensitive information than against systems handling aggregate statistical data. A threat to your de-identified research database is lower risk than a threat to your active EHR.
The assessment must also identify where each system lives physically and what the access points are. Is the system on-premises in a locked data center? Running in cloud infrastructure? Accessible from employee laptops? Accessed remotely from clinicians' homes through a VPN? Each access point creates a potential threat vector, and different locations have fundamentally different threat profiles. A system locked in your data center faces insider threats and physical theft. A system accessible from the public internet faces external attacks. A system on employee laptops faces device theft and compromised home networks. Understanding location and access tells you what categories of threats you need to address.
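The inventory described above (each system, its business purpose, its data type, its location, and its access points) lends itself to a simple structured record. The following sketch is illustrative only: the field names such as `data_type` and `access_points` are hypothetical, not a HIPAA-mandated schema.

```python
from dataclasses import dataclass, field

@dataclass
class SystemRecord:
    """One in-scope system in the risk assessment inventory (illustrative schema)."""
    name: str
    business_purpose: str
    data_type: str                  # e.g. "identified PHI", "de-identified", "aggregate"
    location: str                   # e.g. "on-premises data center", "cloud"
    access_points: list = field(default_factory=list)

# Example entries, including the kinds of systems often missed during scoping
inventory = [
    SystemRecord("EHR", "clinical documentation", "identified PHI",
                 "on-premises data center", ["clinic workstations", "VPN remote access"]),
    SystemRecord("Reporting database", "nightly sync of clinical data", "identified PHI",
                 "cloud", ["analyst laptops"]),
    SystemRecord("Shared drive", "office staff patient contact lists", "identified PHI",
                 "cloud storage", ["staff laptops"]),
]

# Any system holding identified PHI is in scope, regardless of how rarely it is used
in_scope = [s.name for s in inventory if s.data_type == "identified PHI"]
print(in_scope)  # → ['EHR', 'Reporting database', 'Shared drive']
```

Even this minimal structure forces the questions the assessment requires: if you can't fill in the location and access points for a system, you haven't finished mapping how data flows through it.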
Defining Your Scope Carefully
Scope is where many risk assessments fail because organizations either make it so broad it becomes unwieldy or so narrow it misses critical systems. You could theoretically conduct a risk assessment of your entire healthcare organization, but that's impractical and overwhelming. Instead, scope is typically defined by department, system, or business function. A hospital might conduct one risk assessment scoped to the EHR system, another to the billing system, another to the laboratory information system. Each assessment is bounded and specific.
Your scope definition must be reasonable and must be documented. If you're assessing EHR risk, specify exactly which EHR system, which departments or clinics use it, what patient populations are affected, and what data types flow through it. Don't assume an auditor understands your environment. Document the boundaries in detail so someone reviewing the assessment can understand exactly what was included and what was intentionally excluded.
Scope also directly determines what risks you address and what you don't. If your risk assessment is scoped to systems you own and operate but excludes systems that vendors manage on your behalf, then the risks from vendor systems aren't addressed in your assessment. That doesn't mean vendor risk disappears—it means you've excluded it from this particular assessment's scope. In practice, you'd typically have a separate risk assessment or vendor management process addressing those systems, and you'd have Business Associate Agreements with those vendors. But the point is clear: scope defines what's in and what's out, and that boundary must be explicit and documented.
Identifying Threats Specific to Your Environment
The assessment must identify realistic threats—the things that could actually go wrong with the systems you've identified. Common threats in healthcare include unauthorized access from external attackers (someone breaking through your firewall), unauthorized access from malicious insiders (a disgruntled employee deliberately accessing records they shouldn't), accidental disclosure (someone sending patient data to the wrong email recipient or posting it in an unsecured location), system failures where hardware or software corrupts or deletes data, ransomware attacks that encrypt your data and demand payment to unlock it, and simple negligence where data is left unsecured because nobody followed procedure.
For each system in your assessment, identify the threats that actually apply to your situation. An on-premises data center protected by your own security team faces very different threats than a cloud system managed by a vendor. A system accessed by hundreds of clinical staff has different insider threat profiles than a system accessed by three administrative people. A system storing years of archived historical patient data has different data loss exposure than a system storing only current active data. The goal is identifying threats that are specific to your environment and your systems, not creating a generic list of every bad thing that could possibly happen in healthcare IT.
Vulnerabilities are the weaknesses in your current environment that enable those threats. Unencrypted data is a vulnerability that enables unauthorized access if data is intercepted or if devices are stolen. Unpatched systems are a vulnerability that enables external attackers to exploit known security flaws. Weak access controls are a vulnerability that enables insider misuse. Default credentials still in place on systems are a vulnerability that enables unauthorized access. The assessment should identify vulnerabilities in your current environment so you can evaluate whether they're creating unacceptable risk. If you have unencrypted patient data on laptops and your threat assessment identifies device theft as a realistic threat, then you have a vulnerability-threat combination that needs addressing.
Evaluating Likelihood and Impact With Documented Reasoning
For each threat you identify, the assessment must evaluate two dimensions: likelihood (how probable is this threat in your situation) and impact (how bad would it be if the threat materialized). Likelihood ranges from low (unlikely in normal circumstances) to high (probable given your environment). Impact ranges from low (minimal patient harm, minimal regulatory exposure) to high (large-scale data exposure, significant patient harm, severe regulatory exposure). A threat with low likelihood and low impact is acceptable risk that probably doesn't warrant investment in controls. A threat with high likelihood and high impact is unacceptable risk that must be addressed. A threat with high likelihood and low impact might be acceptable depending on the cost of controls. A threat with low likelihood and high impact requires judgment: low probability but catastrophic consequences usually means you need controls anyway.
The critical part: the assessment must document your reasoning, not just your conclusion. Don't just list "ransomware" as a threat. Explain your evaluation: "We evaluated the threat of ransomware affecting the EHR database. Likelihood: Medium, because we're a healthcare organization (known target for ransomware attacks) with remote employee access from home (potential infection vector), but we have firewalls, intrusion detection, and regular backups in place (mitigating factors). Impact: High, because a successful attack would prevent patient care delivery, expose patient data during encryption, and potentially violate our HIPAA obligations. Therefore overall risk is High and requires controls." That documented reasoning explains to an auditor why you made the control decisions you did. It shows rational decision-making even if the auditor might have weighted likelihood and impact differently.
Four Options for Each Risk: What You Actually Do About It
After identifying risks, the assessment must document how you're treating each one. You have four basic options: mitigate the risk by implementing controls, accept the risk as tolerable, transfer the risk to insurance or a vendor, or avoid the risk by changing operations.
Most risks are mitigated through controls. You can't eliminate the threat of external attacks, so you mitigate through firewalls, intrusion detection, and network hardening. You can't eliminate insider risk, so you mitigate through access controls and monitoring. Mitigation is the most common risk treatment, and your assessment should document what controls address what threats.
Some risks are accepted. You might determine that the risk of a specific type of insider theft is very low in your organization because you conduct background checks on all staff, have low employee turnover, and have strong security culture. You'd document: "We evaluated the risk of a healthcare professional stealing patient data for identity theft. Likelihood: Low, based on background check screening and organizational culture of data protection. Impact: High if it occurred. Mitigating controls: access logs, annual security training, regular audit reviews. Residual risk: Medium. Organizational decision: Accept, given the combination of screening, culture, and the cost-benefit of additional controls."
Risks can sometimes be transferred to insurance. Cyber liability insurance transfers some financial risk of a breach—it might cover notification costs, forensic investigation, or legal defense. But insurance doesn't transfer regulatory risk. If HHS finds that you violated HIPAA, the insurance doesn't prevent enforcement—it might cover some financial costs, but HHS still has authority to fine you and require remediation. Insurance is a financial mechanism for managing breach costs, not a compliance mechanism for managing regulatory risk.
Risks can be avoided by deciding not to do something that creates the risk. Don't store patient data longer than absolutely required—shorter retention means less data exposure. Don't allow remote access to sensitive systems if it's not operationally necessary. Don't use cloud storage if you can handle the data safely on-premises. Avoidance is sometimes the right answer, though it's rarely practical in healthcare where patient care often requires data retention, remote access, and cloud-based systems.
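The four treatment options can be captured as a small, constrained record so that every risk in the register carries an explicit decision and its justification. This is a hypothetical sketch, not a required format; the field names are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    MITIGATE = "mitigate"  # implement controls that reduce the risk
    ACCEPT = "accept"      # tolerate the residual risk, with documented justification
    TRANSFER = "transfer"  # e.g. cyber liability insurance (financial risk only)
    AVOID = "avoid"        # change operations so the exposure never arises

@dataclass
class RiskDecision:
    risk: str
    treatment: Treatment
    justification: str     # required: an undocumented acceptance won't survive audit

decision = RiskDecision(
    risk="Insider theft of patient data for identity theft",
    treatment=Treatment.ACCEPT,
    justification="Background checks, low turnover, strong security culture; "
                  "cost of additional controls outweighs residual risk reduction.",
)
print(decision.treatment.value)  # → accept
```

Using a closed set of treatments mirrors the point of this section: there is no fifth option of silently ignoring a risk, and every accepted risk must carry an explicit business justification.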
Documentation That Survives HHS Review
Documentation quality is what separates a reasonable compliance decision from non-compliance during an audit. Your risk assessment must be written clearly enough that an HHS auditor reviewing it can understand your environment, your threat analysis, your risk evaluations, your control selections, and your residual risk acceptance. If your documentation is clear and your risk decisions appear reasoned and proportionate, the auditor will likely accept it. If your documentation is vague, incomplete, or your risk decisions seem unjustified or overly casual, the auditor will cite compliance gaps.
Document each step thoroughly. List the systems you assessed and the business purpose each serves. For each system, list the data types it handles, the threat vectors it faces, and its physical location and access points. Identify the threats you evaluated and your reasoning for likelihood and impact assessments. Document your current controls and how you believe they address each threat. Document your residual risk—the risk that remains even after your current controls are in place. Document your treatment decision for each risk (mitigate, accept, transfer, avoid) and explain the reasoning behind each decision. If you're accepting a risk, document the business justification explicitly: why is this risk acceptable to your organization even though it could cause harm? That justification can be anything from "the cost of remediation outweighs the benefit" to "alternative operational approaches are not feasible" to "this risk aligns with our organizational risk tolerance."
The assessment must be dated and signed by appropriate organizational leadership—typically the Chief Information Officer, Chief Compliance Officer, or Chief Executive Officer. That signature shows organizational awareness and intentional decision-making. The assessment shouldn't be something IT created in a vacuum and handed to leadership as a compliance checkbox. Business leaders, compliance leadership, and IT should all be involved in the assessment and should be comfortable signing off on it.
Finally, the risk assessment must be reviewed and updated periodically. The Security Rule does not name a fixed interval, but annual review is the widely adopted baseline, and significant changes should trigger an update sooner. Your environment changes constantly: new systems are added, old systems are retired, the threat landscape evolves, your controls improve or degrade. The assessment must evolve with your environment. Many organizations conduct a risk assessment once and never update it, which is a form of non-compliance. The regulation assumes you're continuously monitoring your environment and updating your risk assessment as things change.
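A simple staleness check can help enforce the review cycle. The 12-month interval below reflects the common annual baseline, which is an organizational policy choice rather than a number fixed in the regulation.

```python
from datetime import date, timedelta

# Annual baseline is an assumption of this sketch, not regulatory text
REVIEW_INTERVAL = timedelta(days=365)

def review_overdue(last_reviewed: date, today: date) -> bool:
    """True if the risk assessment is past its scheduled periodic review."""
    return today - last_reviewed > REVIEW_INTERVAL

print(review_overdue(date(2023, 1, 15), date(2024, 3, 1)))  # → True: over 12 months elapsed
print(review_overdue(date(2024, 1, 15), date(2024, 6, 1)))  # → False
```

A check like this only covers the calendar trigger; material changes such as a new system deployment or a vendor change should prompt a review regardless of the date.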
Why This Is the Foundation of Everything Else
The risk assessment is the foundation because it justifies every other compliance decision you make going forward. Why did you decide to require multi-factor authentication as a mandatory control? Because your risk assessment identified unauthorized access as a high-probability, high-impact threat, and MFA directly addresses that threat. Why did you implement encryption for patient data at rest? Because your assessment identified data theft as a credible threat and encryption mitigates it substantially. Why did you implement comprehensive audit logging? Because your assessment identified insider misuse and accidental disclosure as threats, and audit logs enable detection and recovery.
Organizations that skip the risk assessment or do it superficially typically end up in one of two bad places: either they over-control and spend enormous budgets on controls they don't actually need, or they under-control and miss critical threats. Organizations with a documented risk assessment make rational, defensible decisions about where to invest effort and money. They can explain to their board why certain controls are implemented and why others weren't necessary.
The risk assessment also protects the organization during regulatory review. If an HHS audit finds a gap in your controls, you can explain: "We conducted a documented risk assessment, we identified this threat as low-likelihood in our environment based on X factors, and we accepted the risk based on Y business justification." You might still be wrong in the auditor's judgment, but you've demonstrated reasonable, documented decision-making. Without a risk assessment, you have no justification for your controls—just guesswork and hope.
Getting It Right From the Start
The risk assessment is not busywork and it's not a compliance checkbox you complete once and forget. It's the essential first step that determines what controls you actually need, justifies your control decisions to regulators, and protects your organization if something goes wrong. You can't claim compliance without a risk assessment. Organizations that treat risk assessment as a checkbox typically fail audits because they've selected controls without understanding their actual threats. The assessment must identify all systems handling patient data, identify threats specific to your environment, evaluate likelihood and impact with documented reasoning, document your risk treatment decisions with business justification, and be signed off by organizational leadership. Annual updates ensure the assessment evolves as your environment changes. This is the bedrock of everything that follows in your compliance program.
Fully Compliance provides educational content about IT compliance and cybersecurity. This article reflects general information about HIPAA risk assessment as of its publication date. HIPAA requirements evolve and interpretations vary. Consult a qualified compliance professional for guidance specific to your organization.