April 7, 2026 · Reading Time: 7 minutes

Test Your People, Not Just Your Filters

Your phishing simulation told you 12% of employees clicked. Unfortunately, that number is mostly meaningless. The question that matters: what could an attacker do next?

Here's what you actually need to know: if 15 people clicked a fake Amazon delivery email, does it matter — provided your security controls stop the attacker from getting anywhere afterward?

Most organizations are measuring the wrong thing. They're tracking click rates on simulated phishing emails and calling it a security program. Click rates tell you what your employees recognized. They don't tell you what an attacker can chain. Those are different questions — and only one of them matters when a real adversary is working your environment.

The only way to know whether your human attack surface is exploitable is to run the attack.

Why Phishing Simulations Don't Tell You Enough

"Traditional" phishing simulations aren't going away. Cyber insurers check for them. Compliance frameworks expect them. They have a place, despite being mostly ineffective on their own.

But they answer a narrow question: did your employees click on a suspicious email? They don't answer the questions that matter more — like whether your help-desk can be talked into resetting credentials over the phone, whether your office can be physically accessed by someone with a confident walk and a plausible story, or whether a credential harvested from a phishing click could be used to reach something that matters.

According to Verizon's 2024 Data Breach Investigations Report, 68% of breaches involve a non-malicious human element — a person falling for a social engineering attack or simply making a mistake. Their 2025 report puts that figure at nearly 60%, even after removing malicious insiders from the count. Either way, the human layer has been the dominant factor in breaches for years running. It hasn't budged. Because awareness training and phishing simulations don't close the structural gaps — they just teach employees to recognize last year's attacks.

Attackers don't keep using patterns that get caught. They watch what works, drop what doesn't, and move on. 

Phishing click rate is a training metric. What you need is an attack metric.

What Is Social Engineering Testing?

Social engineering testing simulates real attacker behavior against your organization's people, processes, and physical environment — not just your email filters. Where phishing simulations measure awareness, social engineering testing measures exploitability.

A properly scoped engagement covers three dimensions:

  • Physical: Tailgating through secured doors, badge cloning, walking in behind a delivery. Less sophisticated than a zero-day. Often more reliable.
  • Technical: Spear phishing, vishing (voice phishing), credential harvesting, adversary-in-the-middle attacks designed to bypass MFA. Built around your actual environment — your VPN client, your org chart, your vendor relationships.
  • People: Pretexting against your helpdesk, vendor impersonation, manipulation of approval and access workflows. The scenarios that never appear in a generic phishing simulation because they require someone to actually do the homework.

The distinction that matters: scenarios are built around how your organization actually operates. Your vendors. Your help-desk scripts. Your physical layout. Your specific access flows.

Generic simulations find generic gaps. Targeted testing finds yours.

What Is a Human Vector Audit?

A Human Vector audit is a structured engagement that tests the full breadth of your human attack surface — physical, technical, and people — using realistic adversary scenarios tailored to your organization. It goes beyond measuring who clicked and maps where the damage actually goes.

Where a standard phishing simulation ends at the click, a Human Vector audit traces the chain:

  • Who clicked — and whether the credential they entered could unlock something real
  • Whether MFA can be bypassed — via callback social engineering, adversary-in-the-middle interception, or SIM swapping pretexts
  • Whether your helpdesk workflows can be manipulated — using publicly available information about your organization that an attacker would actually have
  • Whether physical access is achievable — independently of any digital vector
  • How far lateral movement could realistically go — from the initial foothold to something that would matter to your board

The methodology is informed by HUMINT tradecraft — scenarios are designed from the attacker's perspective first, working backward to what's testable and useful for your team. 

The point isn't to find out who failed. It's to find out what the actual impact would be when a real attack lands.

How Does OSec Test the Human Attack Surface?

We build red team engagements around how your organization actually operates. That starts with reconnaissance — the same kind an attacker would do before making contact.

A Human Vector engagement with OSec often includes:

  • Spear phishing and vishing campaigns scoped to your environment
  • Pretexting scenarios against your helpdesk and vendor relationships, designed to test whether process weaknesses can be exploited with nothing more than plausible information
  • Physical access attempts, if it’s part of your scope — tailgating, badge approaches, delivery pretexts
  • Credential exposure mapping — breach database checks, password reuse analysis, overprivileged account identification
  • Full attack chain documentation — not a list of who clicked, but a map of how findings connect and what an attacker would actually do with them
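Credential exposure mapping often starts with checking harvested or reused passwords against known breach corpora. One common, privacy-preserving approach is the k-anonymity model used by the Have I Been Pwned "Pwned Passwords" range API: you send only the first five characters of a password's SHA-1 hash, get back all matching hash suffixes, and compare locally. The sketch below (a minimal illustration, not OSec's actual tooling) shows the two local steps — the network fetch of `https://api.pwnedpasswords.com/range/{prefix}` is left to the caller:

```python
import hashlib

def hibp_range_query_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash into the 5-char prefix that is sent
    to the Pwned Passwords range API and the suffix compared locally.
    The full password and full hash never leave your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def exposure_count(suffix: str, range_response: str) -> int:
    """Parse a range-API response body (one 'SUFFIX:COUNT' pair per line)
    and return how often the password appears in known breaches.
    Returns 0 if the suffix is absent, i.e. the password was not found."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

A caller would fetch the range for the returned prefix, then pass the response text and the suffix to `exposure_count`; a nonzero result means the password is burned and any account reusing it is a live finding.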

In a recent engagement with a major banking institution, our assessment started with nothing more than a visitor's WiFi connection — the same starting point available to anyone walking through the front door. What followed was a cascade of findings that connected digital vulnerabilities to physical access gaps, ultimately demonstrating how far an attacker could go from a lobby laptop. Read the full case study.

We're CREST-accredited, with 14+ years of experience and 240,000+ hours tested across financial services, enterprise, government, and critical infrastructure. Our methodology has been proven at organizations including Walmart, Disney, Morgan Stanley, and the NHS.

When the engagement is done, we retest to confirm findings are closed. No extra charge. No extra pitch.

We've built scenarios in environments where failure wasn't theoretical. That shapes how we work.

What Does OSec’s Human Vector Audit Deliver?

Most security testing ends with a PDF. A list of findings, severity ratings, and recommendations that sit in a folder until the next audit cycle.

A Human Vector audit ends differently.

You get behavioral data mapped to real attack paths — not a click-rate dashboard or a medium-severity finding with no context. You get a clear picture of what was exploitable, how far it goes, and what to fix first — prioritized by what an attacker would actually chain, not by CVSS score.

For the CISO who needs to communicate risk to a board: the output is designed to show the business-level impact of human-layer exposure, not just the technical details. What could have been accessed. What process was manipulated. What the realistic downstream consequence looks like.

For the security team that has to act on it: findings come with fewer, clearer "do this next" items. Not a hundred medium findings that compete for attention. The paths that matter, in priority order, with retesting to confirm they're closed.

For organizations that want to go further — continuous human-layer validation rather than a point-in-time engagement — OSec's Incenter platform extends this coverage into ongoing testing across your full attack surface.

One engagement is a snapshot. The organizations that stay ahead make it a habit.

TL;DR: Key Takeaways

  • Phishing simulations measure awareness; social engineering testing measures exploitability. Those are different questions, and only one of them tells you what an attacker can actually do.
  • The human attack surface spans three dimensions — physical, technical, and people — and a meaningful test needs to cover all three.
  • A Human Vector audit maps the full chain: who clicked, what access that unlocks, whether MFA can be bypassed, whether helpdesk workflows hold up, and how far an attacker could realistically move.
  • Findings connect to real attack paths, prioritized by what an attacker would actually chain — not by severity rating.
  • Retesting is included. You'll know when it's closed.

Frequently Asked Questions

What is a Human Vector audit? A Human Vector audit is a structured security engagement that tests an organization's full human attack surface — spanning physical, technical, and people-based vectors — using adversary scenarios tailored to how the organization actually operates. It goes beyond standard phishing simulations to trace complete attack chains: from initial click or physical access through credential use, process manipulation, and lateral movement potential.

How is a Human Vector audit different from a phishing simulation? A phishing simulation sends employees a test email and measures how many click. A Human Vector audit tests whether your people, processes, and physical environment can be exploited under realistic adversary conditions — including vishing, pretexting, physical access attempts, and helpdesk manipulation. It answers a different question: not whether employees recognized a suspicious email, but how far an attacker could actually get.

What does social engineering testing include? Social engineering testing simulates real attacker behavior against an organization's people and processes. Depending on scope, this can include spear phishing campaigns, vishing calls, pretexting against helpdesk and vendor relationships, physical access attempts, and credential exposure mapping. A well-scoped engagement is built around your organization specifically — not a generic template applied to your domain.

How do you measure phishing risk beyond click rates? Click rates measure whether employees recognized a simulated attack. Phishing risk is better measured by what happens after a click: whether harvested credentials can access real systems, whether MFA can be circumvented, how far an attacker could move laterally from the initial access, and whether helpdesk and vendor processes can be manipulated independently of email. A Human Vector audit produces this richer picture.

What's the difference between a red team exercise and a social engineering test? A social engineering test focuses on the human and process layer — phishing, vishing, pretexting, physical access. A red team exercise simulates a full adversary operation: whatever combination of human-layer access, technical exploitation, and physical vectors an attacker would realistically use to achieve a specific objective. OSec's Human Vector audit can function as a standalone social engineering engagement or as a component of a broader red team operation, depending on scope.

How often should organizations run social engineering testing? At minimum, annually — and whenever there are significant changes to your organization: new vendors onboarded, major workforce changes, new office locations, or shifts in your helpdesk and access workflows. Human attack surfaces change when organizations change. A test from 18 months ago doesn't tell you where you stand today.


Want to know where your human attack surface actually breaks? Request a Human Vector audit and find out before someone else does.

Talk to the OSec team →
