Building an Internal Audit Program

Reviewed by Sarah Mitchell, CISA

An internal audit program identifies compliance gaps before external auditors do. The Institute of Internal Auditors found that organizations with mature internal audit functions had 44% fewer external audit findings and reduced audit costs by an average of 30%. Internal audit is preventive rather than reactive, transforming audit surprises into known issues with remediation already underway.

You have an external auditor scheduled. The auditor's job is to show up, examine your controls, and report on what works and what doesn't. Your internal audit program is how you get there first. It's the difference between audit surprises that trigger emergency remediation and findings you've already identified and begun fixing, and the financial and operational difference is substantial.

An internal audit doesn't need to be elaborate. You don't need a dedicated audit staff or expensive software. What you need is a systematic process: someone, or a small team, spends a defined amount of time each year walking through your controls, testing whether they actually work, documenting what they find, and feeding those findings back into your control environment so you improve. That systematic approach transforms internal audit from a theoretical best practice into actual risk management that changes your organization.

Defining Your Audit Scope and Frequency

The first decision is what you're going to audit. Your internal audit program should cover all significant controls across your business over the course of a year. For most organizations in a compliance-driven context, that means governance and risk management, access control, encryption and data protection, monitoring and logging, incident response, vendor management, and how you handle sensitive data. The specific areas depend on your business and what you're preparing to be externally audited against. If you're preparing for a SOC 2 audit, your internal audit scope should mirror SOC 2's trust services criteria. If you're preparing for HIPAA, your scope should mirror the Security Rule's administrative, physical, and technical safeguards.
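
If it helps to see scope as an artifact rather than an idea, a scope definition can be as small as a mapping from the framework you're targeting to the control areas you'll examine. A minimal sketch in Python, where the framework names and area lists are illustrative placeholders rather than any framework's authoritative criteria:

```python
# Illustrative mapping from target framework to the control areas an
# internal audit cycle should cover. These lists are examples, not any
# framework's authoritative criteria.
AUDIT_SCOPE = {
    "SOC 2": [
        "governance and risk management",
        "access control",
        "encryption and data protection",
        "monitoring and logging",
        "incident response",
        "vendor management",
    ],
    "HIPAA": [
        "administrative safeguards",
        "physical safeguards",
        "technical safeguards",
    ],
}

for area in AUDIT_SCOPE["SOC 2"]:
    print(area)
```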

Frequency matters because you need to know your control environment before auditors do. Most organizations run internal audits at least annually, covering the entire control landscape in one cycle or in rolling quarterly reviews where different areas rotate through the audit process. A quarterly rolling approach means each quarter focuses on a different set of controls or business area, so by the end of twelve months you've examined your entire control environment once. This is particularly useful for larger organizations where trying to audit everything simultaneously would be disruptive. What matters is that you have a documented plan that says what's being audited when, and that you actually follow that plan. Documentation creates accountability, and accountability is what prevents audit plans from being drafted in January and forgotten by March.
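
A rolling plan is also easy to generate rather than maintain by hand. Here's a minimal sketch, assuming six illustrative control areas and a four-quarter cycle:

```python
from itertools import cycle

# Illustrative control areas to rotate through the year; in practice
# these come from your documented audit scope.
CONTROL_AREAS = [
    "governance and risk management",
    "access control",
    "encryption and data protection",
    "monitoring and logging",
    "incident response",
    "vendor management",
]

def rolling_plan(areas, year, quarters=4):
    """Assign control areas to quarters so the full scope is
    covered once over the year."""
    plan = {f"{year}-Q{q}": [] for q in range(1, quarters + 1)}
    slots = cycle(plan)  # round-robin over the quarter labels
    for area in areas:
        plan[next(slots)].append(area)
    return plan

for quarter, areas in rolling_plan(CONTROL_AREAS, 2025).items():
    print(quarter, "->", ", ".join(areas))
```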

Testing Controls Against Reality

Once you've defined what you're auditing, the real work is testing: verifying that controls actually operate as designed. If your policy says unused user accounts are disabled after ninety days, testing means examining your actual user account repository and confirming that accounts unused for more than ninety days are actually disabled. If your security monitoring control says you're monitoring for suspicious login attempts, testing means examining the monitoring logs and confirming they exist and are being actively reviewed. Testing proves that the gap between your documentation and your actual operations isn't a chasm.
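
To make the dormant-account example concrete, here's a minimal sketch of that test in Python. The record format and field names are hypothetical; in practice the data would come from your directory or identity provider:

```python
from datetime import datetime, timedelta

# Hypothetical export of user account records from a directory or
# identity provider.
ACCOUNTS = [
    {"user": "alice", "last_login": "2025-01-10", "enabled": True},
    {"user": "bob",   "last_login": "2024-06-02", "enabled": True},   # deviation
    {"user": "carol", "last_login": "2024-05-20", "enabled": False},  # compliant
]

def dormant_account_deviations(accounts, as_of, max_idle_days=90):
    """Return accounts idle longer than the policy threshold that are
    still enabled; each one is a control deviation."""
    cutoff = as_of - timedelta(days=max_idle_days)
    deviations = []
    for acct in accounts:
        last_login = datetime.strptime(acct["last_login"], "%Y-%m-%d")
        if acct["enabled"] and last_login < cutoff:
            deviations.append(acct["user"])
    return deviations

print(dormant_account_deviations(ACCOUNTS, as_of=datetime(2025, 3, 1)))
```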

When the control population is too large to test exhaustively, sample-based testing is the practical alternative. You can't examine every user access review from the past year. Instead, you select a representative sample—perhaps thirty accounts or three months of data—and test those thoroughly. The sample should be random or representative so it actually reflects what's happening across your entire control population, not just the easy cases you happen to pick.
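
Here's a minimal sketch of drawing that kind of sample. The population and sample size are illustrative, and the fixed seed is one way to document exactly which items were selected:

```python
import random

def draw_sample(population, size=30, seed=2025):
    """Draw a reproducible random sample; the fixed seed lets you
    document exactly which items were selected."""
    if len(population) <= size:
        return list(population)  # small populations: test everything
    rng = random.Random(seed)
    return rng.sample(list(population), size)

# Hypothetical population: one record ID per access review performed.
access_reviews = [f"AR-{n:04d}" for n in range(1, 413)]
print(draw_sample(access_reviews, size=30))
```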

Document your test results systematically: what you examined, what you expected to find, what you actually found, and whether the control operated as designed. If you find control deviations, those become findings. A finding might be "in our sample of thirty access reviews, three were missing proper manager approval," or "logging is configured but logs are not being actively reviewed." The specificity matters because it tells you exactly what to fix.
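
A consistent record structure keeps results comparable from test to test. One minimal sketch, using the access-review finding above as the example:

```python
from dataclasses import dataclass

@dataclass
class TestResult:
    """One systematically documented control test."""
    control: str   # the control tested
    examined: str  # what evidence was examined
    expected: str  # what the control should produce
    found: str     # what was actually observed
    operated_as_designed: bool

result = TestResult(
    control="quarterly access reviews",
    examined="sample of 30 access reviews",
    expected="manager approval documented for every review",
    found="3 of 30 reviews missing manager approval",
    operated_as_designed=False,
)
```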

Reviewing Documentation and Paper Trails

Controls leave evidence: access request approvals, training records, system configuration screenshots, policy acknowledgments, change management logs, and security testing results. Reviewing documentation confirms that controls not only exist but leave proper records of their operation. A control that happens but leaves no trace is a control that's hard to defend to an external auditor and even harder to verify you're actually doing consistently.

Documentation review examines whether these records exist and are complete. Are access requests properly documented with approval from the requesting manager? Are employee security training records complete with dates and completion confirmation? Are system changes properly documented with authorization and implementation dates? Are security policies actually being acknowledged by staff, with evidence of that acknowledgment? Are incident response actions documented so you can show what was done and when?

Complete documentation suggests your controls are operating. Gaps in documentation—missing approvals, incomplete records, no evidence that something supposedly happening actually happened—suggest either that controls aren't operating consistently or that the documentation process itself is broken. If your policy requires quarterly access reviews but you can only find two quarters of documentation in the past year, that's a red flag that either reviews aren't happening quarterly or you're not capturing them.
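
That red flag is also mechanically checkable, assuming each review record carries the quarter it covers. A minimal sketch with hypothetical data:

```python
# Hypothetical evidence index: the quarters for which a completed
# access-review record exists.
documented_quarters = {"2024-Q1", "2024-Q3"}

expected_quarters = {f"2024-Q{q}" for q in range(1, 5)}
missing = sorted(expected_quarters - documented_quarters)

if missing:
    print("Red flag: no access-review documentation for", ", ".join(missing))
```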

Walking Controls to See Them in Action

Documentation tells you what should have happened. Walking controls means observing what actually happens. If your control says "we require security awareness training for all employees," walking the control means attending a training session, confirming it covers the topics your policy specifies, and observing that it actually occurs. If your policy says "we conduct quarterly access reviews," walking the control means observing an actual access review being performed, seeing who participates and how the review actually functions, not just seeing a record that says reviews happened.

Walking controls serves two purposes. It confirms the control is real and operational, not just documented. It also often reveals how the control is actually being performed versus how it's supposed to be performed according to your policies. Sometimes you learn that the documented procedure is more cumbersome than it needs to be, so people have created a workaround that's actually more efficient. Sometimes you learn that the documented procedure is unclear or impractical, so people have adapted it in ways that might create gaps. Either way, you understand your actual control landscape versus your documented control landscape.

Walking every control isn't always feasible. You can't observe a "we monitor logs daily" control by sitting in the security operations center for a year. But for major controls—particularly governance activities, high-risk areas, and controls that involve human decision-making—walking the control gives you qualitative insight beyond what documentation provides.

Interviewing the People Running Your Controls

The people who perform controls understand them intimately. An interview with your network security team about how they monitor systems might reveal that monitoring is actually more comprehensive than your formal documentation states, or conversely, that coverage has gaps you didn't know about. Interviews with your access control owner might surface challenges with your procedure that create bottlenecks or cause people to cut corners. Conversations with your incident response team might reveal that your incident response plan is comprehensive on paper but people don't actually follow it because the plan doesn't account for how your environment actually works.

Structured interviews using a prepared list of questions are more effective than casual conversations. You're trying to understand how the control actually works, what challenges exist, and what might be improved. The tone matters significantly. People who see internal auditors as partners trying to help improve controls are far more open than people who see auditors as enforcers coming to find problems. Approaching conversations with the perspective of "we're trying to understand your control, find where we can improve, and support you in doing better" yields better results than "we're checking whether you're doing what you should."

Documenting Findings and Tracking Remediation

As you test, observe, and interview, you'll identify gaps and opportunities for improvement. Document these findings specifically. Rather than "access control needs improvement," a finding should be "in our sample of thirty access reviews covering Q1 and Q2, approval documentation was missing for three reviews, suggesting the approval process may not be consistently followed." Classify findings by severity. Critical findings mean a control is completely broken or missing and directly impacts your security. Major findings weaken a control but don't eliminate it entirely. Minor findings affect efficiency or best practices but don't significantly impact your actual security posture.
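
Structured records make severity classification easier to apply consistently. A minimal sketch, with the severity definitions paraphrased from above:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = "control broken or missing; direct security impact"
    MAJOR = "control weakened but not eliminated"
    MINOR = "efficiency or best-practice gap; little security impact"

@dataclass
class Finding:
    identifier: str
    description: str  # specific: what, where, how many
    severity: Severity

finding = Finding(
    identifier="F-2025-007",
    description=("In our sample of 30 access reviews covering Q1 and Q2, "
                 "approval documentation was missing for 3 reviews."),
    severity=Severity.MAJOR,
)
```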

Findings only matter if they drive improvement. For each finding, develop a remediation plan: what specifically will be fixed, who's responsible, what's the deadline, and how will you know it's fixed. Tracking remediation ensures that findings don't just sit in a report collecting dust. You follow up on the plan, confirm that work is happening, and verify that when the deadline arrives, the issue is actually resolved.
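
A remediation register can be as simple as each finding paired with its plan, plus a check for overdue items. A minimal sketch with illustrative field names and dates:

```python
from datetime import date

# Illustrative remediation register: each finding with its plan.
remediations = [
    {"finding": "F-2025-007", "owner": "IAM team",
     "deadline": date(2025, 6, 30), "verified_fixed": False},
    {"finding": "F-2025-012", "owner": "SecOps",
     "deadline": date(2025, 4, 15), "verified_fixed": True},
]

def overdue(register, today):
    """Findings past their deadline that haven't been verified fixed."""
    return [r for r in register
            if not r["verified_fixed"] and r["deadline"] < today]

for item in overdue(remediations, date(2025, 7, 1)):
    print(f"OVERDUE: {item['finding']} (owner: {item['owner']})")
```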

Communicating Results and Driving Improvement

Share audit results in a report that describes what was audited, what testing was performed, significant findings, remediation plans, and your overall assessment of control effectiveness. The tone should be balanced: acknowledge controls that are working well while highlighting areas for improvement. This isn't about creating a perfect audit report. It's about creating an honest picture of your control environment.

Different audiences need different levels of detail. Technical teams need detailed findings with specific recommendations for how to improve. Executive leadership needs a summary of your overall control posture and the main areas you're focusing on for improvement. Your board might want just the top-line assessment and any critical findings. Communicate findings early rather than surprising teams with a final report. Discuss findings as you discover them, and then formalize them in the written report.
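
One way to produce all three views from the same findings is to filter by severity. A minimal sketch, where the audience-to-severity thresholds are illustrative judgment calls, not a standard:

```python
# Illustrative findings with severity labels.
findings = [
    {"id": "F-2025-003", "severity": "critical"},
    {"id": "F-2025-007", "severity": "major"},
    {"id": "F-2025-019", "severity": "minor"},
]

# Illustrative audience thresholds: which severities each audience sees.
AUDIENCES = {
    "technical teams": {"critical", "major", "minor"},  # full detail
    "executives": {"critical", "major"},                # summary level
    "board": {"critical"},                              # top-line only
}

for audience, severities in AUDIENCES.items():
    visible = [f["id"] for f in findings if f["severity"] in severities]
    print(f"{audience}: {len(visible)} finding(s) -> {', '.join(visible)}")
```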

The real value of internal audit is using findings to actually improve your controls. An issue found in this year's audit that's still present in next year's audit is a sign that your audit program isn't driving improvement. An issue found and fixed is evidence that your internal audit process is working. Tracking remediation across multiple audit cycles shows whether your control environment is improving, staying stable, or degrading. Improving trends suggest your program is effective. Stable trends suggest controls aren't getting better. Degrading trends signal that your program needs strengthening.
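
Repeat findings are the simplest trend signal to compute. A minimal sketch, assuming findings carry identifiers that stay stable across cycles:

```python
# Illustrative finding IDs from two audit cycles.
findings_2024 = {"F-ACCESS-APPROVALS", "F-LOG-REVIEW", "F-VENDOR-DD"}
findings_2025 = {"F-LOG-REVIEW", "F-BACKUP-TESTING"}

repeats = findings_2024 & findings_2025  # found again: audit isn't driving fixes
fixed = findings_2024 - findings_2025    # found and resolved: the program works
new = findings_2025 - findings_2024

print("repeat findings:", sorted(repeats))
print("resolved since last cycle:", sorted(fixed))
print("newly identified:", sorted(new))
```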

The Preventive Power of Internal Auditing

An internal audit program is fundamentally preventive. It finds problems when you can fix them quietly through your normal improvement processes, before external auditors arrive and document findings in official reports. External auditors discovering a significant control gap creates audit findings that may be shared with your board, your clients, and potentially regulators. You fixing a gap before external auditors discover it means the gap gets addressed in your continuous improvement process. That's the difference between findings appearing in official audit reports versus improvements happening in the normal course of operations.

Organizations that run systematic internal audit programs tend to have cleaner external audits, faster audit cycles, better relationships with external auditors, and control environments that actually improve over time. You now understand what an internal audit program looks like: defining scope and frequency, testing controls systematically, reviewing documentation, observing controls in operation, interviewing stakeholders, documenting specific findings with severity classifications, planning and tracking remediation, and using those findings to drive genuine improvement. That systematic approach is what makes internal audit valuable.


Fully Compliance provides educational content about IT compliance and cybersecurity. This article reflects general information about internal audit programs. Standards, requirements, and best practices evolve — consult a qualified compliance professional for guidance specific to your organization.