Mike Piekarski
12 min read

The 5 Gaps We Find in Almost Every Security Program

After hundreds of gap assessments, the same 5 security program gaps keep showing up. Here's what they are, why they persist, and how to actually fix them.


After doing hundreds of cybersecurity gap assessments, I expected the findings to get more exotic over time. They haven’t. The same failures show up in organizations of every size, across every industry, regardless of how much they’ve spent on security. Not zero-days. Not sophisticated nation-state tradecraft. Basic program failures that have been there for years, sometimes a decade, sitting quietly behind a stack of compliance checkboxes.

That’s not cynicism. It’s actually useful information. It means the problems are solvable, and it means they’re predictable enough that we can walk into almost any assessment and know roughly where to look.

Here are the five gaps we find most often. If you want the primer on what a gap assessment is and how the process works, start with What Is a Cybersecurity Gap Assessment or How to Run a CIS Controls Gap Assessment. This post assumes you’re past that and want to know what we actually find.


Gap 1: Patch Management Exists on Paper, Not in Practice

The policy says 30 days. The reality is six months, sometimes longer.

We ran an assessment for a regional financial services firm in New Jersey (about 400 employees, a full IT team, a dedicated security analyst). They had a patch management policy that had been formally approved by the board. They were running monthly vulnerability scans. On paper, the program looked fine.

We pulled the scan data from the last four quarters and did something their team hadn’t done: we compared the same CVEs across reports to see how many were aging. Forty-three high and critical vulnerabilities had appeared in at least three consecutive quarterly scans without remediation. Two of those were on internet-facing systems. One was a known-exploited vulnerability that CISA had added to its KEV catalog eight months prior.

When we sat down with the IT manager, the problem was immediately obvious: patching was everybody’s job, which meant it was nobody’s job. Tickets got created, but there was no owner accountable for closure. The security analyst flagged findings; the sysadmins patched when they had time. Nobody was tracking mean time to remediation, and nobody was escalating when patches weren’t applied.

The fix wasn’t a new tool. They already had the scanner and the ticketing system. The fix was process and accountability: assigning patch ownership by system category, setting SLAs with actual teeth, and adding a weekly 15-minute review where open items older than the SLA got escalated to the IT director. Within 90 days, they had cleared the backlog and were hitting 30-day SLAs on critical patches consistently.

Why automated tools miss this: scanners report current state. They don’t tell you how long a vulnerability has been present or whether remediation is actually happening. That analysis requires pulling historical data and asking questions, not just running a scan.
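That historical comparison is easy to script once you have the exports. Here's a minimal sketch, assuming your scanner can export each quarter's findings as a CSV with cve, host, and severity columns; those field names are illustrative, so map them to whatever your tool actually produces:

```python
import csv
from collections import defaultdict

def aging_findings(scan_csvs, min_quarters=3):
    """Find (cve, host) pairs present in at least `min_quarters`
    consecutive scan exports, given file paths ordered oldest first.

    Assumes each CSV has 'cve', 'host', and 'severity' columns --
    adjust to match your scanner's actual export format.
    """
    streak = defaultdict(int)  # (cve, host) -> consecutive quarters seen
    for path in scan_csvs:
        with open(path, newline="") as f:
            seen = {(r["cve"], r["host"]) for r in csv.DictReader(f)
                    if r["severity"].lower() in ("high", "critical")}
        for key in seen:
            streak[key] += 1
        for key in list(streak):
            if key not in seen:  # remediated this quarter; reset the streak
                del streak[key]
    return sorted(k for k, n in streak.items() if n >= min_quarters)
```

Point this at four quarters of exports and anything it returns is a finding that has survived at least three consecutive scans, which is exactly the list that should be driving your escalation meeting.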


Gap 2: Access Reviews Are Checkbox Exercises

This one hurts the most when we find it, because it usually means a combination of compliance theater and real risk sitting side by side.

A manufacturing company in York County, Pennsylvania engaged us after their cyber insurance carrier asked pointed questions about identity and access management during renewal. They had been conducting annual access reviews for three years: spreadsheets signed off by managers, filed with HR, sent to the auditors. The reviews were happening. We were about to find out what “happening” actually meant.

We pulled their Active Directory export and cross-referenced it against their HR system. Twenty-two accounts belonging to former employees were still active, people who had left the company between six months and two years prior. Of those, six had VPN access. Three had access to the ERP system where production and financial data lived. One former IT contractor had domain admin rights.

The managers had been signing the access review spreadsheets without actually checking each account. The spreadsheet had been pre-populated with the prior year’s data, and reviewers were confirming that the access “looked right” rather than actively verifying it. Nobody had built a process to pull current HR termination data and compare it against the active account list.

The fix required connecting two systems that had never talked to each other: HR and IT. They implemented a termination workflow: when someone left, HR triggered an automatic ticket to disable accounts within 24 hours. The quarterly access review got rebuilt as an actual exception report: here are all accounts where access has changed since last quarter, here are accounts not used in 90 days, here are service accounts with elevated privileges. Reviewers were now looking at outliers, not confirming a pre-filled list.

The insurance renewal went fine. More importantly, they found and closed a real exposure.

Why automated tools miss this: a tool can tell you who has access. It can’t tell you whether that access is appropriate or whether the review process is genuine. That requires talking to the people who run the reviews and looking at what the output actually contains.
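The exception report itself doesn't require an identity governance platform. A minimal sketch, assuming you can get a directory export and an HR termination list into simple structures (the field names here are illustrative, not any vendor's schema):

```python
from datetime import datetime, timedelta

def access_exception_report(ad_accounts, hr_terminated, today=None):
    """Build exception lists for reviewers: terminated-but-active
    accounts, accounts dormant for 90+ days, and enabled accounts
    with elevated privileges.

    ad_accounts: list of dicts with 'user', 'enabled',
                 'last_logon' (datetime or None), 'privileged' (bool).
    hr_terminated: set of usernames HR lists as terminated.
    """
    today = today or datetime.now()
    stale_cutoff = today - timedelta(days=90)
    report = {
        "terminated_but_active": [a["user"] for a in ad_accounts
                                  if a["enabled"] and a["user"] in hr_terminated],
        "dormant_90d": [a["user"] for a in ad_accounts
                        if a["enabled"] and (a["last_logon"] is None
                                             or a["last_logon"] < stale_cutoff)],
        "privileged": [a["user"] for a in ad_accounts
                       if a["enabled"] and a["privileged"]],
    }
    return report
```

The point of the design is what lands in front of reviewers: a short list of outliers to investigate, not a pre-filled roster to rubber-stamp.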


Gap 3: Incident Response Plans Have Never Been Tested

The plan exists. It’s detailed. It’s filed somewhere. Nobody on the current team has ever opened it.

We were brought in by a mid-sized healthcare organization in Delaware after a ransomware incident at a peer organization made their leadership nervous. They wanted to know if they were ready. They had a 47-page incident response plan, developed by a consulting firm three years earlier. It referenced their SIEM by name, included escalation trees, covered regulatory notification requirements under HIPAA. It looked thorough.

We asked for a tabletop exercise before we dug into the documentation. Fifteen minutes in, the problems were apparent. The escalation tree listed the CIO by name; she had left the company 18 months ago. The plan referenced an external IR retainer with a vendor the company had since dropped. The SIEM mentioned in the plan had been replaced with a different product, and the containment procedures referenced features the old system had that the new one didn’t.

More telling: the current security team had never run a drill. When we asked what would happen if they discovered encrypted files on their file server at 9 PM on a Friday, the answers varied significantly between the two people in the room. The CISO thought they’d call the CEO first. The IT manager thought they’d try to contain it first and notify leadership after they understood the scope. Neither answer was wrong, exactly, but the fact that they didn’t have a shared understanding of the first 30 minutes of a response was a real problem.

We ran a tabletop exercise the following month with the full response team. The gaps that surfaced (who declares an incident, who handles external communications, who calls the cyber insurer) all got documented and resolved. The plan got updated and tested twice more in the following year.

Why automated tools miss this: no scanner finds a stale IR plan. This gap requires reading the document, talking to the people named in it, and running a scenario. It’s entirely invisible to any technical assessment that doesn’t include a process review.


Gap 4: Security Awareness Training Has No Teeth

The training gets done. The phishing simulations run. The click rates don’t move.

A professional services firm in the Philadelphia suburbs had a mature-looking awareness program. Annual training through a well-known platform, monthly phishing simulations, completion reports submitted to the board. They were spending real money on it. Their click rate on phishing simulations had been between 22% and 28% for two years running.

When we looked at how the program actually worked, the issue was immediately clear: the phishing simulations had no downstream process. Someone clicked a link, got a brief “you’ve been phished” message, and that was the end of the interaction. Results were tracked in aggregate and reported to leadership as a metric. Nobody was looking at which individuals were clicking repeatedly, and nobody was doing anything different for the people who clicked every single month.

About 40 employees had clicked on at least three simulations in the past year. Of those, 12 had clicked on five or more. None of them had received any additional training, coaching, or acknowledgment that they were a repeated risk. Several were in finance or HR, roles with access to sensitive data and the ability to initiate wire transfers.

The fix here wasn’t buying a better training platform. It was building a tiered response: first click gets the pop-up and automated training module, second click gets a conversation with their manager, third click gets mandatory supplemental training with IT involvement. High-risk roles like finance and executives got more frequent simulations and more targeted content. Within six months, the click rate had dropped to 11%.

This is also where vulnerability assessment of the human layer meets program design. Technical controls can catch some phishing attempts. The human controls have to be deliberately built and maintained, and that means treating repeated clickers as a specific, addressable problem rather than an acceptable average.

Why automated tools miss this: the training platform reports completion rates and click rates. It doesn’t report on whether the program is actually changing behavior or whether the response to failures is appropriate. That analysis is in the program design, not the tool output.
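The tiering logic itself is trivial; the hard part is committing to the follow-through. Here's a sketch with thresholds matching the ladder above (the action labels are ours, not any training platform's terminology):

```python
from collections import Counter

# Escalation ladder from the tiered response described above:
# highest threshold first, so the most serious tier wins.
TIERS = [
    (3, "mandatory supplemental training with IT involvement"),
    (2, "manager conversation"),
    (1, "automated training module"),
]

def triage_clickers(click_events):
    """Map each user in a list of (user, clicked) simulation results
    to the response tier their cumulative click count warrants."""
    counts = Counter(user for user, clicked in click_events if clicked)
    actions = {}
    for user, n in counts.items():
        for threshold, action in TIERS:
            if n >= threshold:
                actions[user] = action
                break
    return actions
```

Feed it a year of simulation results and the output is the list your program was missing: the specific people who need something more than another pop-up.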


Gap 5: Logging and Monitoring Are Incomplete

The SIEM is deployed. The dashboards look active. But critical systems aren’t shipping logs, and the alerts that do fire are mostly ignored.

A financial services company in Connecticut had invested significantly in their security operations. They had a SIEM, a SOC agreement with an MSSP, and contractually guaranteed 24/7 monitoring. During our assessment, we asked to see the list of log sources feeding the SIEM.

Of their 14 domain controllers, 6 were not shipping logs. Their primary database server, the one holding customer financial records, had never been onboarded as a log source. Three legacy Windows servers that still ran line-of-business applications were excluded because “they’re old and the log volume is high.” The MSSP was monitoring what they’d been given. Nobody had done a systematic inventory of critical assets and verified that each one was represented in the SIEM.

We also reviewed 30 days of alerts. The MSSP had generated 847 alerts in that period. The client’s internal team had investigated 23 of them. When we asked about the rest, the answer was that the MSSP was responsible for triage. When we asked the MSSP, they clarified that they escalated anything they considered high severity, and everything else was visible in the portal for the client to review. Nobody was reviewing the portal.

The detection rules were the defaults that had come with the SIEM. No tuning had been done for the client’s environment. Rules that generated hundreds of false positives per week were still firing, contributing to alert fatigue that made the entire monitoring function unreliable.

The remediation involved three tracks: completing the log source inventory and onboarding the missing critical systems, establishing a formal alert review process with clear escalation criteria, and scheduling a detection rule tuning engagement, which is a specialized piece of work that’s often folded into a broader penetration testing or assessment program. The Virtual CISO relationship they started with us afterward gave them ongoing oversight to make sure the tuning actually happened and was maintained.

Why automated tools miss this: a tool can only analyze what it receives. If critical systems aren’t feeding the SIEM, no automated analysis of SIEM data will reveal the gap. Finding it requires an inventory-based approach: start with the asset list, compare it to the log source list, and look for discrepancies.
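The comparison at the heart of that inventory-based approach is just a set difference. A sketch, assuming you can export both lists as hostnames (the naive normalization here is a placeholder; real exports usually need more careful matching of FQDNs, IPs, and aliases):

```python
def log_coverage_gaps(critical_assets, siem_log_sources):
    """Inventory-based coverage check: which critical assets are not
    feeding the SIEM, and which log sources aren't on the asset list
    (often decommissioned hosts still being indexed)."""
    assets = {h.strip().lower() for h in critical_assets}
    sources = {h.strip().lower() for h in siem_log_sources}
    return {
        "missing_from_siem": sorted(assets - sources),
        "unknown_sources": sorted(sources - assets),
    }
```

Run against the Connecticut client's environment, this is the check that would have surfaced the six silent domain controllers and the never-onboarded database server on day one.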


Why These Keep Showing Up

The honest answer is that most organizations treat a cybersecurity gap assessment as a point-in-time event rather than as the beginning of a program. They get the report, they address the critical findings, and the medium and low items get moved to a backlog that nobody reviews for 18 months. By the time the next assessment happens, some of those items have become critical.

The other factor is that gaps tend to accumulate at the intersection of people and process, which is exactly where automated scanners can’t see. A scanner can find a missing patch. It can’t find that the patching process is broken because nobody owns it. It can identify accounts with elevated privileges. It can’t find that the access review process is theater because nobody actually looks at the data.

This is why we spend as much time talking to the people who run security operations as we do looking at technical evidence. The stories above weren’t discovered by running tools. They were discovered by asking how things actually work, then verifying the answer against data.


Start With the Right Questions

If you want to understand what a gap assessment involves or how CIS Controls map to the process, those posts cover the methodology in detail.

If you’re in healthcare and trying to satisfy both HIPAA requirements and cyber insurance demands, or in financial services facing GLBA and state-level requirements, the five gaps above are probably in your environment right now; they’re in almost everyone’s.

We work with organizations across the mid-Atlantic and beyond to find these gaps, build realistic remediation plans, and stay involved long enough to see the fixes actually land. If you want to know which of these five apply to your program, schedule a conversation with us. We’ll tell you what we’d expect to find before we even start.
