A wire request lands in your inbox. Everything looks right—the name, the tone, even a voice note that sounds exactly like your CEO. In the past, that was enough. Today, it’s a liability. This episode breaks down a hard truth: trust based on recognition is no longer safe. We’re no longer dealing with crude phishing attempts—we’re facing believable authority powered by AI. Traditional controls like SPF, DKIM, and DMARC still matter, but they only validate the path of a message, not the person behind it. And that gap is exactly where deepfake Business Email Compromise thrives. If your organization still trusts email signals to authorize high-risk actions, you’re already exposed.
THE EMAIL HEADER IS NO LONGER A TRUST SIGNAL
For years, we relied on familiar cues—display names, domains, writing styles—to make quick trust decisions. But AI has erased the old tells. Attackers can now generate flawless messages, mimic executive tone, and align perfectly with real business context. Emails don’t need to look suspicious anymore—they just need to feel familiar for a moment. And sometimes, they’re not even spoofed. They come from real accounts, through trusted SaaS platforms, passing every technical check. That’s the dangerous shift: your security stack sees a valid message, your team sees a believable request—but neither answers the only question that matters—should this action be allowed?
WHAT EMAIL SECURITY PROVES—AND WHAT IT NEVER COULD
Mail authentication validates infrastructure, not intent. SPF confirms sending servers, DKIM ensures message integrity, and DMARC aligns policies—but none of them verify human authority. A perfectly authenticated email can still carry a fraudulent request. That’s not a failure of the tools—it’s a misuse of them. We’ve been asking email security to solve a problem it was never designed to handle. And now, with deepfake voice, cloned writing styles, and AI-driven social engineering, the illusion of legitimacy is stronger than ever. Teams confuse polished communication with real authority—and that’s exactly where attacks succeed.
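To make the "infrastructure, not intent" point concrete, here is a minimal sketch of what a receiving system actually learns from email authentication. The header string and parser below are illustrative (a simplified `Authentication-Results` header, not a full RFC 8601 parser): every verdict can be `pass` on a fraudulent request sent from a compromised mailbox.

```python
# Simplified illustration: parsing an Authentication-Results header shows
# what email authentication can tell you (path validity) and what it
# cannot (the sender's authority for the request inside the message).
def parse_auth_results(header: str) -> dict:
    """Extract spf/dkim/dmarc verdicts from a (simplified) Authentication-Results header."""
    verdicts = {}
    for clause in header.split(";"):
        clause = clause.strip()
        for check in ("spf", "dkim", "dmarc"):
            if clause.startswith(check + "="):
                # First token after '=' is the verdict, e.g. 'pass' or 'fail'
                verdicts[check] = clause.split("=", 1)[1].split()[0]
    return verdicts

header = ("mx.example.com; spf=pass smtp.mailfrom=contoso.com; "
          "dkim=pass header.d=contoso.com; dmarc=pass")
print(parse_auth_results(header))
# All three checks pass, yet nothing here says whether the human behind
# the message holds payment authority: that question is out of scope.
```

The takeaway is that a clean parse result is the *starting point* for a decision, never the decision itself.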
THE SHIFT: FROM TRUSTING MESSAGES TO VERIFYING ACTIONS
The old model let email carry trust into workflows. The new model demands proof before any action is taken. This is the essence of Zero Trust applied to business processes. Instead of asking “Did this come from a trusted source?”, we must ask, “Can this person prove they have the authority for this decision right now?” That shift moves security from the inbox to the moment of consequence—where money moves, access changes, and critical decisions happen.
ENTRA VERIFIED ID: CHANGING THE UNIT OF TRUST
This is where Microsoft Entra Verified ID transforms the model. Instead of relying on messages, organizations issue verifiable credentials—cryptographically signed proof of identity and authority. These credentials are held by users and presented when required. The system includes three roles: issuer, holder, and verifier. Trust is no longer assumed—it’s requested, presented, and validated. With decentralized identifiers (DIDs) and cryptographic verification, workflows can confirm not just who someone is, but what they are authorized to do. This is a fundamental shift—from identity as recognition to identity as proof.
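The issuer/holder/verifier loop can be sketched in a few lines. This is a toy model, not the real Entra Verified ID protocol: it uses an HMAC with a shared secret as a stand-in for the asymmetric signatures that real verifiable credentials anchor in a DID document, and all names (`ISSUER_KEY`, `issue_credential`, `verify_credential`) are hypothetical. What it does show faithfully is the shape of the trust flow: the issuer signs claims, the holder carries them, and the verifier checks the proof locally without calling the issuer.

```python
import hashlib
import hmac
import json
from typing import Optional

ISSUER_KEY = b"issuer-secret"  # stand-in; real VCs use public keys published in a DID document

def issue_credential(subject: str, claims: dict) -> dict:
    """Issuer role: sign claims about a subject; the holder stores the result in a wallet."""
    payload = json.dumps({"sub": subject, "claims": claims}, sort_keys=True)
    signature = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_credential(credential: dict) -> Optional[dict]:
    """Verifier role: check the signature locally; no callback to the issuer is needed."""
    expected = hmac.new(ISSUER_KEY, credential["payload"].encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, credential["signature"]):
        return json.loads(credential["payload"])["claims"]
    return None  # forged or tampered: the workflow stops here

vc = issue_credential("user@contoso.example", {"role": "treasury-approver"})
assert verify_credential(vc) == {"role": "treasury-approver"}

# Any edit to the claims breaks the proof, no matter how plausible it reads.
tampered = {"payload": vc["payload"].replace("treasury-approver", "ceo"),
            "signature": vc["signature"]}
assert verify_credential(tampered) is None
```

The design point: trust becomes a cryptographic check rather than a judgment call, which is exactly what a cloned writing style or voice cannot defeat.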
FROM IDENTITY TO AUTHORITY: THE CRITICAL DESIGN CHANGE
Most organizations get this wrong by stopping at “verified employee.” But identity alone doesn’t stop fraud—authority does. A credential must reflect real business permissions: who can approve payments, who can change vendor data, who can reset executive access. These claims must be precise, enforceable, and tied directly to workflows. Narrow credentials are stronger, easier to govern, and faster to revoke. Because authority changes faster than identity—and stale authority is a hidden risk.
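"Narrow and revocable" can be expressed directly in the claim itself. The claim shape below is an illustrative sketch, not an Entra Verified ID schema: it names one action, one monetary threshold, and an explicit expiry, so stale authority dies on its own even if revocation lags.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def claim_is_valid(claim: dict, now: Optional[datetime] = None) -> bool:
    """A narrow authority claim is only valid for its one action and before it expires."""
    now = now or datetime.now(timezone.utc)
    return (claim.get("action") == "approve-payment"
            and datetime.fromisoformat(claim["expires"]) > now)

claim = {
    "action": "approve-payment",   # one action, not a broad "verified employee" badge
    "limit_usd": 50_000,           # explicit, enforceable threshold
    "expires": (datetime.now(timezone.utc) + timedelta(hours=8)).isoformat(),
}
assert claim_is_valid(claim)
```

Keeping the claim this small is what makes it easy to govern: revoking "approve-payment up to $50k for 8 hours" is a far cleaner operation than revoking a person's entire verified identity.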
WHERE VERIFIED ID FITS IN A REAL BEC DEFENSE MODEL
Verified ID doesn’t replace your existing controls—it strengthens the point where they fail. Email filtering, MFA, and monitoring reduce noise, but they don’t stop high-quality attacks. Verified ID operates at the moment of decision. An email can trigger a workflow, but it cannot complete it without proof. No credential, no action. This moves trust out of human interpretation and into enforceable, cryptographic validation inside your business systems—finance apps, service desks, and approval workflows.
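The "no credential, no action" gate looks like this inside a workflow. The `approve_wire` step and its claim names are hypothetical stand-ins for whatever your finance system enforces; the pattern is what matters: the email may open the request, but nothing is released until verified claims match the specific action and amount.

```python
from typing import Optional

def approve_wire(request: dict, verified_claims: Optional[dict]) -> str:
    """Release value only when verified claims grant authority for this exact action."""
    if verified_claims is None:
        return "blocked: no proof of authority presented"
    if verified_claims.get("role") != "treasury-approver":
        return "blocked: claims do not grant this authority"
    if request["amount_usd"] > verified_claims.get("limit_usd", 0):
        return "blocked: amount exceeds approved limit"
    return "released"

request = {"amount_usd": 25_000, "vendor": "Acme Supplies"}
print(approve_wire(request, None))  # blocked: the email alone triggers nothing
print(approve_wire(request, {"role": "treasury-approver", "limit_usd": 50_000}))  # released
```

Notice that the gate never inspects the email at all; a flawless deepfake and a crude phish fail it identically.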
IMPLEMENTATION: START SMALL, PROVE CONTROL, SCALE FAST
You don’t need a massive transformation to begin. Start with one high-risk workflow—treasury approvals or executive account recovery. Map where trust is assumed and where actions are executed. Insert verification at the decision point. Measure impact: did it block risky actions, how did it affect speed, and where did users struggle? Expect friction, plan for exceptions, and keep fallback paths strict. Then scale by repeating the pattern—not by expanding scope blindly, but by reinforcing control where it matters most.
WHAT LEADERS NEED TO CHANGE NOW
Business Email Compromise is no longer just an email problem—it’s a business process failure. Leaders must ask: which decisions still rely on email trust? Who can actually prove their authority? Where can value move without verification? The answer to those questions defines your real risk posture. The new standard is simple and non-negotiable: no high-risk action without proof of authority.
CONCLUSION: REPLACE RECOGNITION WITH PROOF
Deepfake attacks succeed because we still trust what we recognize. But recognition can be faked. Authority cannot—if it’s verified properly. The trust model has already failed. The only question is how fast you replace it. If this episode changed how you think about security, follow Mirko Peters on LinkedIn and leave a review on Apple Podcasts. And tell us—what topic should we break down next?
Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365--6704921/support.
🚀 Want to be part of m365.fm?
Then stop just listening… and start showing up.
👉 Connect with me on LinkedIn and let’s make something happen:
- 🎙️ Be a podcast guest and share your story
- 🎧 Host your own episode (yes, seriously)
- 💡 Pitch topics the community actually wants to hear
- 🌍 Build your personal brand in the Microsoft 365 space
This isn’t just a podcast — it’s a platform for people who take action.
🔥 Most people wait. The best ones don’t.
👉 Connect with me on LinkedIn and send me a message:
"I want in"
Let’s build something awesome 👊
1
00:00:00,000 --> 00:00:01,680
A wire request lands in your inbox.
2
00:00:01,680 --> 00:00:03,360
The name looks right, the tone feels right.
3
00:00:03,360 --> 00:00:04,960
Maybe there is even a voice note attached
4
00:00:04,960 --> 00:00:06,520
that sounds exactly like your CEO.
5
00:00:06,520 --> 00:00:08,800
In the past, that felt like enough to hit send,
6
00:00:08,800 --> 00:00:10,880
but in reality, it isn't anymore.
7
00:00:10,880 --> 00:00:13,200
The problem we are facing isn't just simple spoofing.
8
00:00:13,200 --> 00:00:14,800
It is believable authority.
9
00:00:14,800 --> 00:00:17,360
Tools like SPF, DKIM, and DMARC
10
00:00:17,360 --> 00:00:19,920
help confirm the path an email took to get to you,
11
00:00:19,920 --> 00:00:21,800
but they cannot confirm the actual person
12
00:00:21,800 --> 00:00:22,640
behind the decision.
13
00:00:22,640 --> 00:00:24,840
That specific gap is where deepfake business email
14
00:00:24,840 --> 00:00:25,960
compromise lives.
15
00:00:25,960 --> 00:00:27,720
It slips right between authentic delivery
16
00:00:27,720 --> 00:00:29,120
and authentic authority.
17
00:00:29,120 --> 00:00:31,040
The shift we need is simple, but it's massive.
18
00:00:31,040 --> 00:00:34,080
We have to stop trusting signals that people can easily copy
19
00:00:34,080 --> 00:00:36,600
and start requiring proof before any action is taken.
20
00:00:36,600 --> 00:00:39,200
If you want more Microsoft security strategy like this,
21
00:00:39,200 --> 00:00:40,600
make sure to subscribe.
22
00:00:40,600 --> 00:00:42,520
Because the first thing that breaks in this new world
23
00:00:42,520 --> 00:00:44,360
is the email header itself.
24
00:00:44,360 --> 00:00:46,600
The email header is no longer a trust signal.
25
00:00:46,600 --> 00:00:48,720
For a long time, we treated the email header
26
00:00:48,720 --> 00:00:49,920
like a shortcut for trust.
27
00:00:49,920 --> 00:00:51,680
You see the display name, you see the domain,
28
00:00:51,680 --> 00:00:53,760
maybe you see an old thread or a writing style
29
00:00:53,760 --> 00:00:55,160
that matches what you expect.
30
00:00:55,160 --> 00:00:56,680
Your brain fills in the rest of the blanks.
31
00:00:56,680 --> 00:00:58,360
You don't actually verify the sender.
32
00:00:58,360 --> 00:00:59,520
You just recognize them.
33
00:00:59,520 --> 00:01:02,440
Recognition used to be enough because most attacks were rough
34
00:01:02,440 --> 00:01:03,480
and easy to spot.
35
00:01:03,480 --> 00:01:06,680
They were full of bad grammar, weird timing, and obvious pressure.
36
00:01:06,680 --> 00:01:08,080
But that model is gone.
37
00:01:08,080 --> 00:01:11,080
AI has raised the floor for every attacker on the planet.
38
00:01:11,080 --> 00:01:12,880
They no longer need strong writing skills
39
00:01:12,880 --> 00:01:15,440
or an insider tone to get past your defenses.
40
00:01:15,440 --> 00:01:17,680
They can generate clean language, mimic the rhythm
41
00:01:17,680 --> 00:01:20,560
of a specific executive and reference current projects
42
00:01:20,560 --> 00:01:23,120
to shape a message around real business context.
43
00:01:23,120 --> 00:01:25,440
What shows up in your inbox doesn't need to look suspicious.
44
00:01:25,440 --> 00:01:27,640
It just needs to feel familiar for a few seconds.
45
00:01:27,640 --> 00:01:30,760
This matters because deepfake BEC isn't just about fake domains.
46
00:01:30,760 --> 00:01:32,240
Sometimes the domain is real.
47
00:01:32,240 --> 00:01:33,920
Sometimes the account itself is real.
48
00:01:33,920 --> 00:01:35,960
The email might come through a trusted SaaS path
49
00:01:35,960 --> 00:01:38,520
that already fits your company's normal sending pattern.
50
00:01:38,520 --> 00:01:41,440
When that happens, your security stack sees a valid route
51
00:01:41,440 --> 00:01:43,640
while your finance team sees a believable request.
52
00:01:43,640 --> 00:01:45,920
Neither of those checks answers the real question.
53
00:01:45,920 --> 00:01:48,280
Should this person be allowed to trigger this action?
54
00:01:48,280 --> 00:01:51,360
SPF can stop some direct spoofing, and that still has value.
55
00:01:51,360 --> 00:01:53,560
But SPF only tells you if a server is allowed
56
00:01:53,560 --> 00:01:55,600
to send on behalf of a domain.
57
00:01:55,600 --> 00:01:57,600
If an attacker gets into a legitimate mailbox
58
00:01:57,600 --> 00:02:00,320
or uses an approved service to send from the right environment,
59
00:02:00,320 --> 00:02:01,600
SPF won't help you.
60
00:02:01,600 --> 00:02:03,680
The message will still look perfectly clean.
61
00:02:03,680 --> 00:02:05,600
DKIM helps with integrity by telling you
62
00:02:05,600 --> 00:02:07,640
the message wasn't changed while it was moving.
63
00:02:07,640 --> 00:02:09,120
That is useful, but it's narrow.
64
00:02:09,120 --> 00:02:11,360
A fraudulent approval request can keep its fraud
65
00:02:11,360 --> 00:02:13,280
all the way from the sender to the recipient
66
00:02:13,280 --> 00:02:14,640
with perfect integrity.
67
00:02:14,640 --> 00:02:16,600
Nothing in DKIM tells you if the decision
68
00:02:16,600 --> 00:02:18,080
inside that message is real.
69
00:02:18,080 --> 00:02:19,400
DMARC can improve the picture when you
70
00:02:19,400 --> 00:02:21,360
enforce it at quarantine or reject.
71
00:02:21,360 --> 00:02:23,200
The problem is that many domains still
72
00:02:23,200 --> 00:02:26,040
sit at monitoring only, which means they collect reports
73
00:02:26,040 --> 00:02:27,400
without actually blocking anything.
74
00:02:27,400 --> 00:02:29,480
Even a strong DMARC policy doesn't
75
00:02:29,480 --> 00:02:31,400
solve compromised accounts or trusted
76
00:02:31,400 --> 00:02:33,000
vendors with weak controls.
77
00:02:33,000 --> 00:02:34,520
What is actually happening is this.
78
00:02:34,520 --> 00:02:36,720
We built our trust around mail characteristics,
79
00:02:36,720 --> 00:02:39,440
but the attacker no longer needs to break the mail flow to win.
80
00:02:39,440 --> 00:02:40,720
They just need to occupy it.
81
00:02:40,720 --> 00:02:42,360
Once the language is polished and the timing
82
00:02:42,360 --> 00:02:44,000
fits the business moment, the header
83
00:02:44,000 --> 00:02:45,200
stops being a control.
84
00:02:45,200 --> 00:02:46,040
It becomes a hint.
85
00:02:46,040 --> 00:02:47,800
The hint is too weak for payment approvals
86
00:02:47,800 --> 00:02:48,880
or vendor banking changes.
87
00:02:48,880 --> 00:02:50,520
Before we talk about stronger controls,
88
00:02:50,520 --> 00:02:51,920
we need to be precise.
89
00:02:51,920 --> 00:02:53,800
What exactly do these email checks prove?
90
00:02:53,800 --> 00:02:57,240
And what were they never designed to prove in the first place?
91
00:02:57,240 --> 00:02:58,600
What email security proves,
92
00:02:58,600 --> 00:02:59,720
and what it never could.
93
00:02:59,720 --> 00:03:01,440
We need to draw a very clear line here,
94
00:03:01,440 --> 00:03:03,680
because this is the exact point where most security
95
00:03:03,680 --> 00:03:05,440
conversations start to get blurry.
96
00:03:05,440 --> 00:03:08,280
Mail authentication proves that a server was authorized
97
00:03:08,280 --> 00:03:11,040
to send a message, and it proves how that message was handled
98
00:03:11,040 --> 00:03:12,080
during transport.
99
00:03:12,080 --> 00:03:14,680
But it does not prove that the sender has actual authority,
100
00:03:14,680 --> 00:03:15,840
and it definitely doesn't prove
101
00:03:15,840 --> 00:03:17,120
that the request inside that email
102
00:03:17,120 --> 00:03:18,800
should be trusted by your business.
103
00:03:18,800 --> 00:03:21,280
That distinction might sound like a small detail
104
00:03:21,280 --> 00:03:24,200
until you actually map it to a real world approval process.
105
00:03:24,200 --> 00:03:27,640
A message can pass every check from SPF and DKIM
106
00:03:27,640 --> 00:03:30,520
to DMARC alignment, and it can still be a blatant request
107
00:03:30,520 --> 00:03:31,040
for fraud.
108
00:03:31,040 --> 00:03:33,120
This doesn't happen because your security controls
109
00:03:33,120 --> 00:03:36,320
failed at their jobs, but because we keep asking those tools
110
00:03:36,320 --> 00:03:38,800
to answer a question they were never built to handle.
111
00:03:38,800 --> 00:03:40,400
They validate whether the mail systems
112
00:03:40,400 --> 00:03:42,840
behaved as they were expected to, but they cannot validate
113
00:03:42,840 --> 00:03:44,680
whether the human asking for a payment
114
00:03:44,680 --> 00:03:46,360
has the right to make that request.
115
00:03:46,360 --> 00:03:48,400
And one level deeper, the rise of deepfakes
116
00:03:48,400 --> 00:03:50,880
is making this gap wider every single day.
117
00:03:50,880 --> 00:03:53,200
An email can arrive with a perfectly polished message
118
00:03:53,200 --> 00:03:55,480
and a cloned writing style, and it might even
119
00:03:55,480 --> 00:03:58,160
include a short audio clip or a follow-up call
120
00:03:58,160 --> 00:04:00,880
that sounds familiar enough to kill any doubt.
121
00:04:00,880 --> 00:04:02,760
Even when the channel checks come back clean,
122
00:04:02,760 --> 00:04:04,800
the social proof inside that interaction
123
00:04:04,800 --> 00:04:08,360
feels much stronger, and that is exactly where teams get trapped.
124
00:04:08,360 --> 00:04:11,960
They confuse authentic formatting with authentic authority.
125
00:04:11,960 --> 00:04:13,760
And there are other cracks in the foundation, too.
126
00:04:13,760 --> 00:04:15,600
Unicode lookalike domains still trick people
127
00:04:15,600 --> 00:04:18,480
when they are reading quickly, and approved SaaS or API sending
128
00:04:18,480 --> 00:04:20,920
paths can carry malicious messages that fit right
129
00:04:20,920 --> 00:04:22,400
into your normal patterns.
130
00:04:22,400 --> 00:04:24,720
Support teams might see the right brand, the right signature
131
00:04:24,720 --> 00:04:26,720
block, and the right timing, but they still end up
132
00:04:26,720 --> 00:04:28,120
approving the wrong action.
133
00:04:28,120 --> 00:04:29,760
The problem isn't just about technical evasion,
134
00:04:29,760 --> 00:04:31,560
but the fact that business decisions still depend
135
00:04:31,560 --> 00:04:33,400
on cues that can be copied perfectly.
136
00:04:33,400 --> 00:04:34,880
A lot of companies try to fix this
137
00:04:34,880 --> 00:04:36,800
with out-of-band verification.
138
00:04:36,800 --> 00:04:38,440
You call the person, you open a chat in Teams,
139
00:04:38,440 --> 00:04:40,880
or you try to confirm the request through a different route
140
00:04:40,880 --> 00:04:43,280
entirely. That does help, and in many cases,
141
00:04:43,280 --> 00:04:46,080
it is still one of the best controls we have available.
142
00:04:46,080 --> 00:04:47,800
But it has a hard limit because it depends
143
00:04:47,800 --> 00:04:49,960
on people slowing down at the exact moment
144
00:04:49,960 --> 00:04:51,720
when the pressure is at its highest.
145
00:04:51,720 --> 00:04:54,640
Whether it is an urgent payment, an executive escalation,
146
00:04:54,640 --> 00:04:57,160
or an end of quarter close, the process always
147
00:04:57,160 --> 00:04:59,800
tends to weaken the moment the risk goes up.
148
00:04:59,800 --> 00:05:02,880
That is why the zero trust view is so important in this context.
149
00:05:02,880 --> 00:05:04,440
You have to stop just verifying the channel
150
00:05:04,440 --> 00:05:06,360
and start verifying the actor and the action.
151
00:05:06,360 --> 00:05:07,960
You have to ask a different question.
152
00:05:07,960 --> 00:05:10,960
Instead of asking if a request came through a trusted path,
153
00:05:10,960 --> 00:05:13,640
you should ask if this person can prove they hold the authority
154
00:05:13,640 --> 00:05:17,080
this workflow requires right now for this specific decision.
155
00:05:17,080 --> 00:05:18,840
That is a completely different model.
156
00:05:18,840 --> 00:05:20,840
In the old model, the email carries the trust
157
00:05:20,840 --> 00:05:22,040
into the workflow.
158
00:05:22,040 --> 00:05:24,320
In the new model, the workflow demands proof
159
00:05:24,320 --> 00:05:26,680
before that trust is allowed to move any further.
160
00:05:26,680 --> 00:05:28,800
The message can still start the process
161
00:05:28,800 --> 00:05:30,960
by notifying or requesting a review,
162
00:05:30,960 --> 00:05:34,040
but it no longer has the power to authorize anything on its own.
163
00:05:34,040 --> 00:05:35,600
And this is the shift: we are no longer
164
00:05:35,600 --> 00:05:37,400
talking about better spam filtering
165
00:05:37,400 --> 00:05:39,600
or looking closer at header inspections.
166
00:05:39,600 --> 00:05:41,440
We are moving into identity architecture
167
00:05:41,440 --> 00:05:43,880
where the control point sits at the decision itself
168
00:05:43,880 --> 00:05:46,040
and where authority has to be proven in a way
169
00:05:46,040 --> 00:05:49,360
an attacker cannot fake by writing better text or cloning a voice.
170
00:05:49,360 --> 00:05:50,840
That changes the entire design space
171
00:05:50,840 --> 00:05:53,040
because once you stop trusting the message,
172
00:05:53,040 --> 00:05:55,880
you have to decide what is going to replace it.
173
00:05:55,880 --> 00:05:58,640
Entra Verified ID changes the unit of trust.
174
00:05:58,640 --> 00:06:00,760
So what replaces the message is not a smarter inbox,
175
00:06:00,760 --> 00:06:02,840
but a completely different kind of trust object.
176
00:06:02,840 --> 00:06:05,160
Entra Verified ID does not just give your users
177
00:06:05,160 --> 00:06:06,480
another login to manage.
178
00:06:06,480 --> 00:06:09,520
It allows an organization to issue a verifiable credential,
179
00:06:09,520 --> 00:06:12,480
which is a signed piece of proof about a person or an entity
180
00:06:12,480 --> 00:06:14,400
that can be presented and checked later.
181
00:06:14,400 --> 00:06:16,560
That matters because a login session only tells you
182
00:06:16,560 --> 00:06:18,240
that someone got into a system,
183
00:06:18,240 --> 00:06:20,560
but a credential tells you what they are actually allowed
184
00:06:20,560 --> 00:06:22,320
to claim within a business process.
185
00:06:22,320 --> 00:06:23,640
The model is simple on purpose.
186
00:06:23,640 --> 00:06:25,960
You have an issuer, a holder, and a verifier.
187
00:06:25,960 --> 00:06:27,680
The issuer creates the credential,
188
00:06:27,680 --> 00:06:29,240
which could be your own organization
189
00:06:29,240 --> 00:06:30,720
or an outside identity partner.
190
00:06:30,720 --> 00:06:33,000
The holder keeps that credential in a digital wallet
191
00:06:33,000 --> 00:06:34,520
and the verifier asks for proof
192
00:06:34,520 --> 00:06:36,760
whenever a high-risk action needs to be taken.
193
00:06:36,760 --> 00:06:38,520
These three roles have clear boundaries
194
00:06:38,520 --> 00:06:41,120
and each one is important because trust is no longer hidden
195
00:06:41,120 --> 00:06:42,400
inside the message.
196
00:06:42,400 --> 00:06:45,200
It is requested, presented, and checked directly.
197
00:06:45,200 --> 00:06:47,400
Under this model, the identifier behind the credential
198
00:06:47,400 --> 00:06:49,600
is a decentralized identifier or a DID.
199
00:06:49,600 --> 00:06:51,760
You can think of this as a stable reference point
200
00:06:51,760 --> 00:06:54,880
for the entity, while the DID document is where a verifier
201
00:06:54,880 --> 00:06:56,840
can find the public keys and metadata
202
00:06:56,840 --> 00:06:58,400
needed to check signatures.
203
00:06:58,400 --> 00:07:00,160
The credential stays with the holder,
204
00:07:00,160 --> 00:07:02,120
the proof is verified cryptographically,
205
00:07:02,120 --> 00:07:04,360
and the verifier does not need to call the issuer
206
00:07:04,360 --> 00:07:06,600
every single time just to see if the proof is real.
207
00:07:06,600 --> 00:07:09,400
That changes the operating model in a very significant way.
208
00:07:09,400 --> 00:07:11,040
Instead of relying on a mail trace,
209
00:07:11,040 --> 00:07:13,160
a callback habit or a familiar signature block,
210
00:07:13,160 --> 00:07:14,760
the workflow can request a credential
211
00:07:14,760 --> 00:07:16,760
and validate it against published keys.
212
00:07:16,760 --> 00:07:19,440
If the proof checks out, the verifier can trust the claim
213
00:07:19,440 --> 00:07:21,720
because the cryptography matches perfectly.
214
00:07:21,720 --> 00:07:24,120
If it does not, the workflow stops immediately
215
00:07:24,120 --> 00:07:26,280
and the trust decision becomes a technical fact
216
00:07:26,280 --> 00:07:27,840
rather than an interpretation.
217
00:07:27,840 --> 00:07:29,360
The claims inside that credential
218
00:07:29,360 --> 00:07:31,360
can go way beyond basic identity.
219
00:07:31,360 --> 00:07:33,520
This is where the design becomes incredibly useful
220
00:07:33,520 --> 00:07:35,640
for fighting business email compromise.
221
00:07:35,640 --> 00:07:37,680
You are not limited to a first name, a last name,
222
00:07:37,680 --> 00:07:39,040
or an employee number.
223
00:07:39,040 --> 00:07:41,600
You can express a specific role or a specific level
224
00:07:41,600 --> 00:07:44,040
of authority. You can identify a treasury approver,
225
00:07:44,040 --> 00:07:46,720
an executive payment signer, or an approved vendor banking
226
00:07:46,720 --> 00:07:47,480
contact.
227
00:07:47,480 --> 00:07:49,720
These are business claims, not just facts from a directory,
228
00:07:49,720 --> 00:07:52,600
and that is exactly what high-risk workflows need to see.
229
00:07:52,600 --> 00:07:54,640
The question is no longer whether an email looks like it
230
00:07:54,640 --> 00:07:55,600
came from the CFO.
231
00:07:55,600 --> 00:07:58,280
The question becomes whether the person behind the request
232
00:07:58,280 --> 00:08:00,280
can present a valid proof that they currently
233
00:08:00,280 --> 00:08:02,920
hold payment authority for this specific action.
234
00:08:02,920 --> 00:08:05,200
That is a much tighter check that maps directly
235
00:08:05,200 --> 00:08:06,520
to the decision itself.
236
00:08:06,520 --> 00:08:08,280
There is another benefit here as well.
237
00:08:08,280 --> 00:08:11,400
Verified ID supports something called selective disclosure,
238
00:08:11,400 --> 00:08:13,840
which means the verifier does not need the entire credential
239
00:08:13,840 --> 00:08:16,440
if the process only requires one specific claim.
240
00:08:16,440 --> 00:08:18,800
If a workflow needs proof that someone is an authorized
241
00:08:18,800 --> 00:08:21,080
approver above a certain dollar threshold,
242
00:08:21,080 --> 00:08:23,960
it does not need to see every other detail about that person.
243
00:08:23,960 --> 00:08:27,320
You share less, you prove enough, and you keep the proof narrow.
244
00:08:27,320 --> 00:08:29,480
That makes the entire trust model much cleaner.
245
00:08:29,480 --> 00:08:31,600
The old model asks people to guess authority based
246
00:08:31,600 --> 00:08:33,960
on the context, but the new model asks systems
247
00:08:33,960 --> 00:08:36,520
to verify authority using signed claims.
248
00:08:36,520 --> 00:08:38,080
One depends on recognition,
249
00:08:38,080 --> 00:08:39,880
Well, the other depends on hard evidence.
250
00:08:39,880 --> 00:08:42,680
Once authority becomes a credential instead of an assumption,
251
00:08:42,680 --> 00:08:44,560
your whole security design starts to shift.
252
00:08:44,560 --> 00:08:46,160
You stop spending all of your energy trying
253
00:08:46,160 --> 00:08:48,680
to decide which messages look suspicious enough to block
254
00:08:48,680 --> 00:08:50,360
because the message is no longer the place
255
00:08:50,360 --> 00:08:52,000
where final trust is granted.
256
00:08:52,000 --> 00:08:54,240
The real control moves to the action point, where
257
00:08:54,240 --> 00:08:57,320
the money moves, the access changes, or the records get updated.
258
00:08:57,320 --> 00:08:58,600
That is the shift in units.
259
00:08:58,600 --> 00:09:00,480
You move from the message to the actor
260
00:09:00,480 --> 00:09:03,200
and from the appearance of the sender to their signed authority.
261
00:09:03,200 --> 00:09:06,160
You move from communication security to decision security.
262
00:09:06,160 --> 00:09:09,280
Once you see that, the next design question becomes very practical.
263
00:09:09,280 --> 00:09:11,520
You have to figure out where verified ID actually
264
00:09:11,520 --> 00:09:14,680
sits inside a real defense model alongside the controls
265
00:09:14,680 --> 00:09:16,520
you already have in place.
266
00:09:16,520 --> 00:09:19,520
Where Verified ID fits in a real BEC defense model.
267
00:09:19,520 --> 00:09:21,960
So where does this actually sit in your security stack
268
00:09:21,960 --> 00:09:24,400
without turning into just another identity side project
269
00:09:24,400 --> 00:09:25,920
that nobody ever uses?
270
00:09:25,920 --> 00:09:27,720
We have to start with the obvious reality.
271
00:09:27,720 --> 00:09:31,240
You aren't getting rid of SPF, DKIM, or DMARC anytime soon.
272
00:09:31,240 --> 00:09:32,880
You still need phishing-resistant MFA,
273
00:09:32,880 --> 00:09:34,640
and you still need your approval thresholds,
274
00:09:34,640 --> 00:09:36,720
your separation of duties and your hard processes
275
00:09:36,720 --> 00:09:38,200
around how money moves.
276
00:09:38,200 --> 00:09:40,920
None of that goes away because cheap spoofing is still a thing
277
00:09:40,920 --> 00:09:43,000
and stolen sessions are still a massive problem.
278
00:09:43,000 --> 00:09:45,120
Baseline hygiene is there to cut the noise,
279
00:09:45,120 --> 00:09:47,480
and reducing that noise is what keeps your teams
280
00:09:47,480 --> 00:09:49,360
from burning hours on low-grade fraud
281
00:09:49,360 --> 00:09:52,600
while they accidentally missed the one request that actually hurts.
282
00:09:52,600 --> 00:09:55,240
But verified ID belongs at a completely different layer.
283
00:09:55,240 --> 00:09:58,080
It doesn't live in the inbox, it lives at the moment of consequence.
284
00:09:58,080 --> 00:10:01,080
That is exactly where most security programs still break today.
285
00:10:01,080 --> 00:10:03,920
They spend a fortune on detection at the edge of the network,
286
00:10:03,920 --> 00:10:06,840
but then they let the risky action itself rely on recognition,
287
00:10:06,840 --> 00:10:09,040
speed, or a weak callback habit.
288
00:10:09,040 --> 00:10:12,120
The email gets scanned by a filter, the account gets a risk score,
289
00:10:12,120 --> 00:10:13,960
the link gets inspected by a sandbox.
290
00:10:13,960 --> 00:10:15,840
And then, after all that technology,
291
00:10:15,840 --> 00:10:18,680
the actual approval depends on whether a human being believes
292
00:10:18,680 --> 00:10:20,840
the request feels normal enough to be real.
293
00:10:20,840 --> 00:10:24,600
That is the exact gap that deepfake BEC is designed to exploit.
294
00:10:24,600 --> 00:10:26,200
A better pattern looks like this.
295
00:10:26,200 --> 00:10:29,080
The email arrives in the inbox, maybe it looks perfectly clean
296
00:10:29,080 --> 00:10:30,960
or maybe it raises a few red flags.
297
00:10:30,960 --> 00:10:33,440
Either way, that message can trigger a workflow,
298
00:10:33,440 --> 00:10:36,080
but the workflow is not allowed to release any value yet.
299
00:10:36,080 --> 00:10:37,720
Before the wire transfer is approved,
300
00:10:37,720 --> 00:10:39,520
before the bank account details are changed,
301
00:10:39,520 --> 00:10:42,000
or before the help desk resets an executive account,
302
00:10:42,000 --> 00:10:44,480
the requester has to present a valid credential.
303
00:10:44,480 --> 00:10:47,880
They have to prove they have the authority required for that specific action.
304
00:10:47,880 --> 00:10:49,800
If there is no proof, there is no release.
305
00:10:49,800 --> 00:10:53,160
This design matters because it stops requiring the message to be perfect
306
00:10:53,160 --> 00:10:55,600
or the analyst to catch every single detail.
307
00:10:55,600 --> 00:10:58,920
The message can start the process, but it is physically unable to finish it.
308
00:10:58,920 --> 00:11:02,160
Authority moves out of the inbox and into the control point
309
00:11:02,160 --> 00:11:04,160
where the business decision is actually enforced.
310
00:11:04,160 --> 00:11:07,680
In your internal scenarios, this usually means using workforce credentials
311
00:11:07,680 --> 00:11:09,160
issued by your own organization.
312
00:11:09,160 --> 00:11:11,680
You might have an executive approver, a treasury signer,
313
00:11:11,680 --> 00:11:13,600
or a service desk escalation sponsor.
314
00:11:13,600 --> 00:11:15,600
These work best when the trust boundaries are already known,
315
00:11:15,600 --> 00:11:17,920
and you want full control over the life cycle,
316
00:11:17,920 --> 00:11:20,920
including the ability to revoke access the second a role changes.
317
00:11:20,920 --> 00:11:23,040
That is one of the strongest reasons to use the model
318
00:11:23,040 --> 00:11:25,160
where the organization issues the ID.
319
00:11:25,160 --> 00:11:27,320
The same system that knows a person changed jobs
320
00:11:27,320 --> 00:11:30,200
can instantly remove the authority behind their credential.
321
00:11:30,200 --> 00:11:32,360
External scenarios work a little differently.
322
00:11:32,360 --> 00:11:36,360
If a supplier needs to prove they are the approved contact for vendor banking changes
323
00:11:36,360 --> 00:11:40,120
or if a partner needs to verify their identity across company boundaries,
324
00:11:40,120 --> 00:11:42,280
portability becomes the priority.
325
00:11:42,280 --> 00:11:45,680
That is where partner-issued or ID-verified credentials make more sense
326
00:11:45,680 --> 00:11:49,680
because the party relying on the info needs proof that travels across organizations.
327
00:11:49,680 --> 00:11:52,240
You can't have every verifier building a private trust arrangement
328
00:11:52,240 --> 00:11:53,920
from scratch for every single partner.
329
00:11:53,920 --> 00:11:56,800
And none of this becomes real until it connects to the systems
330
00:11:56,800 --> 00:11:58,640
that are already releasing value.
331
00:11:58,640 --> 00:12:01,880
We are talking about finance approval apps, service desk workflows,
332
00:12:01,880 --> 00:12:03,280
and vendor management tools.
333
00:12:03,280 --> 00:12:07,240
It might be a custom line of business app or even a simple power platform process.
334
00:12:07,240 --> 00:12:10,760
The credential request and the verification step have to live inside those apps
335
00:12:10,760 --> 00:12:14,440
because that is where trust turns into a payment, an access right, or a change.
336
00:12:14,440 --> 00:12:17,240
If you leave verified ID sitting in a demo portal,
337
00:12:17,240 --> 00:12:19,840
it stays interesting, but it stays irrelevant to the business.
338
00:12:19,840 --> 00:12:21,840
That also means it should connect with Entra
339
00:12:21,840 --> 00:12:25,000
and your broader policy layer instead of floating off on its own.
340
00:12:25,000 --> 00:12:27,560
Your conditional access, identity governance, role management,
341
00:12:27,560 --> 00:12:30,120
and help desk controls should all reinforce each other.
342
00:12:30,120 --> 00:12:31,960
One control checks the strength of the sign-in.
343
00:12:31,960 --> 00:12:34,240
Another checks the device or the session context.
344
00:12:34,240 --> 00:12:37,720
Verified ID then checks the authority for the specific decision being made.
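As a sketch, those three controls compose like independent predicates that must all agree before a decision is allowed. The function and claim names here are hypothetical, not a real Entra API:

```python
def allow_decision(sign_in_strong: bool, device_compliant: bool,
                   authority_claims: set, action: str) -> bool:
    """Each control answers a different question; all must agree.

    - sign_in_strong:   did conditional access see a strong enough sign-in?
    - device_compliant: is the device / session context trusted?
    - authority_claims: the decisions this person's verified credential covers.
    """
    return sign_in_strong and device_compliant and action in authority_claims

print(allow_decision(True, True, {"wire_approval"}, "wire_approval"))  # True
print(allow_decision(True, True, set(), "wire_approval"))              # False
```

The point of keeping the checks separate is that a strong sign-in on a trusted device still fails the gate when the authority claim for that specific action is missing.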
345
00:12:37,720 --> 00:12:39,760
That combination is what makes the model strong.
346
00:12:39,760 --> 00:12:42,560
It isn't a silver bullet, but it provides tighter proof
347
00:12:42,560 --> 00:12:45,800
at the exact point where fraud wants the organization to act.
348
00:12:45,800 --> 00:12:47,880
So the architecture is simple to describe,
349
00:12:47,880 --> 00:12:49,880
even if the implementation takes some focus.
350
00:12:49,880 --> 00:12:52,560
You keep the email defenses, you keep the human process,
351
00:12:52,560 --> 00:12:55,200
you add stronger identity proof at the high-risk actions,
352
00:12:55,200 --> 00:12:57,760
then you make the business system enforce it.
353
00:12:57,760 --> 00:13:00,000
But there is one more part that decides whether this works
354
00:13:00,000 --> 00:13:02,000
or just turns into security theater.
355
00:13:02,000 --> 00:13:06,800
The authority inside that credential has to match the authority inside the real-world process.
356
00:13:06,800 --> 00:13:11,200
If the business is still issuing broad vague claims like verified employee,
357
00:13:11,200 --> 00:13:15,880
the control might look modern, but it won't actually stop the fraud path you are worried about.
358
00:13:15,880 --> 00:13:19,200
Designing credentials around authority, not identity alone.
359
00:13:19,200 --> 00:13:22,200
This is the specific point where a lot of teams go wrong.
360
00:13:22,200 --> 00:13:25,600
They start by creating a generic credential like verified employee
361
00:13:25,600 --> 00:13:29,000
and they assume that stronger identity automatically solves the fraud problem.
362
00:13:29,000 --> 00:13:31,800
It doesn't. A payroll clerk can be a verified employee.
363
00:13:31,800 --> 00:13:34,000
A junior analyst can be a verified employee.
364
00:13:34,000 --> 00:13:36,000
Even a contractor can be a verified employee.
365
00:13:36,000 --> 00:13:38,800
None of those facts tell a workflow who is allowed to approve a wire,
366
00:13:38,800 --> 00:13:40,600
who can change vendor banking data
367
00:13:40,600 --> 00:13:43,600
or who can authorize an emergency reset for a privileged account.
368
00:13:43,600 --> 00:13:45,600
That is where the model has to get much tighter.
369
00:13:45,600 --> 00:13:47,600
Identity tells you who the person is,
370
00:13:47,600 --> 00:13:50,200
but authority tells you what that person is allowed to do.
371
00:13:50,200 --> 00:13:51,400
Those are not the same thing.
372
00:13:51,400 --> 00:13:53,000
In most organizations today,
373
00:13:53,000 --> 00:13:56,000
those two ideas are mixed together inside email habits,
374
00:13:56,000 --> 00:13:58,400
manager expectations and tribal knowledge.
375
00:13:58,400 --> 00:14:02,200
People just know that a certain director usually signs off on certain payments.
376
00:14:02,200 --> 00:14:05,800
They know that a specific assistant sometimes relays approvals for the boss.
377
00:14:05,800 --> 00:14:08,000
They know that treasury calls a certain person
378
00:14:08,000 --> 00:14:09,200
when something looks a bit off.
379
00:14:09,200 --> 00:14:10,200
The business works,
380
00:14:10,200 --> 00:14:14,800
but a huge amount of the real authority sits in social patterns instead of enforceable controls.
381
00:14:14,800 --> 00:14:17,000
That is exactly what attackers are looking to exploit.
382
00:14:17,000 --> 00:14:19,600
So your credential design has to separate these claims clearly.
383
00:14:19,600 --> 00:14:21,200
One layer identifies the person,
384
00:14:21,200 --> 00:14:23,800
another layer states their specific decision right.
385
00:14:23,800 --> 00:14:25,200
And then where it's needed,
386
00:14:25,200 --> 00:14:27,800
a third layer sets the conditions around that right,
387
00:14:27,800 --> 00:14:31,600
like a dollar threshold, a department, a time limit, or a workflow type.
388
00:14:31,600 --> 00:14:33,400
Now the check becomes precise.
389
00:14:33,400 --> 00:14:36,200
It isn't just asking if this person is known to the company.
390
00:14:36,200 --> 00:14:39,400
It is asking if this person currently holds approval authority
391
00:14:39,400 --> 00:14:42,400
for this exact kind of action under these specific conditions.
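A minimal sketch of what that precise check could look like. The claim names (`action`, `max_amount`, `expires`, `revoked`) are assumptions for illustration, not a real Verified ID payload:

```python
from datetime import datetime, timezone

def holds_authority(claims: dict, action: str, amount: int) -> bool:
    """Does this person *currently* hold approval authority for this
    exact action, under these conditions? (Hypothetical claim names.)"""
    if claims.get("revoked", False):                     # fast revocation wins
        return False
    if claims.get("action") != action:                   # wrong decision right
        return False
    if amount > claims.get("max_amount", 0):             # over the threshold
        return False
    expires = datetime.fromisoformat(claims["expires"])  # time-limited authority
    return datetime.now(timezone.utc) < expires

# A narrow "payment approval" credential, not a broad "verified employee" one.
treasury_signer = {
    "action": "wire_approval",
    "max_amount": 50_000,
    "expires": "2099-01-01T00:00:00+00:00",
    "revoked": False,
}

print(holds_authority(treasury_signer, "wire_approval", 25_000))       # True
print(holds_authority(treasury_signer, "vendor_bank_change", 25_000))  # False
```

Notice that flipping a single `revoked` flag, or letting `expires` pass, kills the authority without touching the person's identity record — which is exactly the separation the episode argues for.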
392
00:14:42,400 --> 00:14:45,600
This usually leads to much narrower credential types than people expect to see.
393
00:14:45,600 --> 00:14:48,200
You don't want one broad executive credential
394
00:14:48,200 --> 00:14:51,000
or one all purpose finance credential, you want separate ones.
395
00:14:51,000 --> 00:14:54,000
You want a payment approval credential for amounts over a defined threshold.
396
00:14:54,000 --> 00:14:56,400
You want a vendor master data change authority.
397
00:14:56,400 --> 00:14:58,600
You want an emergency identity recovery sponsor.
398
00:14:58,600 --> 00:15:00,000
The narrower you make the claim,
399
00:15:00,000 --> 00:15:04,000
the easier it is to govern, revoke, and test. Broad authority sounds efficient
400
00:15:04,000 --> 00:15:07,200
until it becomes impossible to reason about during a live incident.
401
00:15:07,200 --> 00:15:09,800
And revocation matters more here than teams often realize
402
00:15:09,800 --> 00:15:12,600
because authority changes much faster than identity does.
403
00:15:12,600 --> 00:15:13,800
A person keeps their name,
404
00:15:13,800 --> 00:15:15,400
they keep their employee record,
405
00:15:15,400 --> 00:15:18,600
but their approval rights can disappear overnight because of a role change,
406
00:15:18,600 --> 00:15:22,200
a leave of absence, a reorganization, or a control failure.
407
00:15:22,200 --> 00:15:25,200
If the credential model cannot respond to those changes quickly,
408
00:15:25,200 --> 00:15:27,600
then the system starts carrying stale trust.
409
00:15:27,600 --> 00:15:30,200
That stale trust becomes your new attack surface.
410
00:15:30,200 --> 00:15:32,600
This is why simple schema design is so important.
411
00:15:32,600 --> 00:15:34,600
Security teams usually want rich claims.
412
00:15:34,600 --> 00:15:36,200
Operations teams want low overhead.
413
00:15:36,200 --> 00:15:37,600
The business just wants speed.
414
00:15:37,600 --> 00:15:39,600
You need all three of those to coexist.
415
00:15:39,600 --> 00:15:42,800
If the credential schema turns into a giant complex policy document
416
00:15:42,800 --> 00:15:45,800
inside a JSON file, nobody is going to manage it well.
417
00:15:45,800 --> 00:15:48,600
Keep the claims limited to what the workflow can actually enforce.
418
00:15:48,600 --> 00:15:52,400
You need clear claim names, clear ownership, and a clear revocation path.
419
00:15:52,400 --> 00:15:53,600
Boring design is what wins here.
420
00:15:53,600 --> 00:15:56,600
There are cases where identity alone is still too weak,
421
00:15:56,600 --> 00:15:58,600
even when you have a strong authority claim.
422
00:15:58,600 --> 00:16:00,400
Think about sensitive recovery flows,
423
00:16:00,400 --> 00:16:03,200
external onboarding, or very high value approvals.
424
00:16:03,200 --> 00:16:05,200
In those moments, you might need stronger proof
425
00:16:05,200 --> 00:16:08,200
that the person holding the credential is the real person it was issued to.
426
00:16:08,200 --> 00:16:12,000
That is where document verification or a face check can raise the level of assurance.
427
00:16:12,000 --> 00:16:13,800
You don't do this for every decision.
428
00:16:13,800 --> 00:16:15,600
You only do it where the business impact
429
00:16:15,600 --> 00:16:19,200
justifies the extra friction, because friction is the other trap you have to avoid.
430
00:16:19,200 --> 00:16:22,200
If every single approval turns into a multi-step ceremony,
431
00:16:22,200 --> 00:16:24,200
people will immediately look for shortcuts.
432
00:16:24,200 --> 00:16:26,200
They will delegate around the system.
433
00:16:26,200 --> 00:16:28,200
They will call support for a workaround.
434
00:16:28,200 --> 00:16:29,600
They will ask for exceptions,
435
00:16:29,600 --> 00:16:31,800
or they will push the work into side channels.
436
00:16:31,800 --> 00:16:34,200
When that happens, the control loses all its force.
437
00:16:34,200 --> 00:16:37,000
Strong proof should only show up where the potential loss is high
438
00:16:37,000 --> 00:16:39,800
and the frequency is low enough to support the extra steps.
439
00:16:39,800 --> 00:16:42,800
Reserve the heaviest checks for the moments that actually need them.
440
00:16:42,800 --> 00:16:44,400
So the design rule is simple.
441
00:16:44,400 --> 00:16:47,600
Do not issue credentials that merely confirm someone is employed.
442
00:16:47,600 --> 00:16:51,000
Issue credentials that map directly to authority in the real business process.
443
00:16:51,000 --> 00:16:53,400
Keep those credentials narrow, revoke them fast,
444
00:16:53,400 --> 00:16:56,200
and only increase assurance where the risk truly demands it.
445
00:16:56,200 --> 00:16:57,800
Now that sounds very clean on paper.
446
00:16:57,800 --> 00:17:01,000
The harder part starts when you try to fit all of this into the systems,
447
00:17:01,000 --> 00:17:04,400
the habits and the weird edge cases that are already running the business today.
448
00:17:04,400 --> 00:17:09,000
Implementation path: start small, prove control, then scale.
449
00:17:09,000 --> 00:17:12,600
How do you actually start this without getting stuck in a massive identity program
450
00:17:12,600 --> 00:17:13,800
that stalls for a year?
451
00:17:13,800 --> 00:17:15,400
The answer is to start small.
452
00:17:15,400 --> 00:17:17,600
Find one workflow where the risk is obvious
453
00:17:17,600 --> 00:17:20,000
and the approval path is narrow enough to actually manage.
454
00:17:20,000 --> 00:17:22,200
Treasury is usually the cleanest place to begin.
455
00:17:22,200 --> 00:17:25,200
Another strong candidate is the help desk identity reset,
456
00:17:25,200 --> 00:17:27,000
specifically for executives and admins,
457
00:17:27,000 --> 00:17:29,600
because those requests usually come with high pressure
458
00:17:29,600 --> 00:17:31,600
and result in immediate access changes.
459
00:17:31,600 --> 00:17:34,600
You need to take that specific workflow apart step by step.
460
00:17:34,600 --> 00:17:36,400
Look at where the request starts
461
00:17:36,400 --> 00:17:41,000
and identify where a person is currently inferring trust from an email, a chat, or a phone call.
462
00:17:41,000 --> 00:17:45,200
Then find the exact point where a system finally releases the money or resets the access.
463
00:17:45,200 --> 00:17:50,000
Mapping this out is vital because the weak point is rarely where the message first arrives.
464
00:17:50,000 --> 00:17:54,400
The real failure happens where the business converts a believable request into a final action.
465
00:17:54,400 --> 00:17:56,800
Once you have the map, you can define your issuer model.
466
00:17:56,800 --> 00:17:59,200
If the people involved are all internal
467
00:17:59,200 --> 00:18:01,600
and the authority stays inside your own tenant,
468
00:18:01,600 --> 00:18:03,800
org-issued credentials are your best bet.
469
00:18:03,800 --> 00:18:08,400
But if the workflow crosses company boundaries and the other side needs proof they can take with them,
470
00:18:08,400 --> 00:18:11,000
you should bring in a trusted ID verification partner.
471
00:18:11,000 --> 00:18:14,000
Choosing the wrong model creates massive friction later,
472
00:18:14,000 --> 00:18:17,400
when revocation and audit requirements start pulling in different directions.
473
00:18:17,400 --> 00:18:21,600
The next step is to put the verifier directly into the system that already controls the outcome.
474
00:18:21,600 --> 00:18:23,600
Do not build a side portal or a lab demo.
475
00:12:23,600 --> 00:12:27,600
Put it in the finance approval app, the service desk workflow, or the vendor change process.
476
00:18:27,600 --> 00:18:31,600
If the business system cannot ask for proof and stop the action when that proof is missing,
477
00:18:31,600 --> 00:18:33,200
the control stays optional.
478
00:18:33,200 --> 00:18:36,000
And we know that optional controls never survive real-world pressure.
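In code terms, the enforcement point might look like this sketch. The function and field names are hypothetical stand-ins for wherever your finance app actually releases the payment:

```python
class ProofMissing(Exception):
    """A high-risk action was attempted without verified authority."""

def release_wire(presentation, amount):
    """The enforcing system asks for proof and stops when it is missing.

    `presentation` stands in for a verified credential presentation;
    the field names are assumptions, not a real Verified ID response.
    """
    if not presentation or not presentation.get("verified"):
        raise ProofMissing("no verified authority presented; wire not released")
    if amount > presentation.get("max_amount", 0):
        raise ProofMissing("presented authority does not cover this amount")
    return f"released {amount}"

print(release_wire({"verified": True, "max_amount": 100_000}, 50_000))  # released 50000
```

The design choice that matters is the exception: when proof is absent, the workflow cannot complete at all, which is what makes the control mandatory rather than optional.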
479
00:18:36,000 --> 00:18:39,400
Keep your pilot group limited and measure three specific things.
480
00:18:39,400 --> 00:18:42,000
You need to know if the new step blocked risky requests,
481
00:18:42,000 --> 00:18:45,000
how much the handling time changed, and how many exceptions popped up.
482
00:18:45,000 --> 00:18:47,800
Those numbers will tell you exactly where the design is working
483
00:18:47,800 --> 00:18:50,000
and where people are trying to find a way around it.
484
00:18:50,000 --> 00:18:53,000
Expect people to push back because wallet setup can be slow
485
00:18:53,000 --> 00:18:55,200
and cross-device flows often confuse users.
486
00:18:55,200 --> 00:19:00,000
You might find that profile photos are outdated or that support teams don't know how to handle a failed presentation.
487
00:19:00,000 --> 00:19:01,800
None of this means the model is broken.
488
00:19:01,800 --> 00:19:04,200
It just means identity controls only become real
489
00:19:04,200 --> 00:19:06,400
when your operations can support them on a bad day,
490
00:19:06,400 --> 00:19:08,000
not just during a polished test.
491
00:19:08,000 --> 00:19:11,000
This is also why your fallback paths need strict discipline.
492
00:19:11,000 --> 00:19:13,000
If the exception route is too loose,
493
00:19:13,000 --> 00:19:15,200
attackers will find it and exploit it first.
494
00:19:15,200 --> 00:19:17,800
Keep your break-glass options manual and documented
495
00:19:17,800 --> 00:19:21,400
and limit them to specific staff who understand the risk they are taking on.
496
00:19:21,400 --> 00:19:23,200
The fallback should be slower on purpose.
497
00:19:23,200 --> 00:19:26,200
It needs to feel different because it carries much more exposure.
498
00:19:26,200 --> 00:19:28,200
After the first workflow proves it works,
499
00:19:28,200 --> 00:19:30,200
you can expand by following the pattern.
500
00:19:30,200 --> 00:19:32,800
Reuse that same design for the next approval path
501
00:19:32,800 --> 00:19:35,200
that relies too much on simple recognition.
502
00:19:35,200 --> 00:19:38,400
Keep your credential types narrow and keep the verifier close to the action.
503
00:19:38,400 --> 00:19:40,400
You scale through repeated control logic,
504
00:19:40,400 --> 00:19:42,600
not through broad claims or vague governance.
505
00:19:42,600 --> 00:19:44,800
At that point, the conversation finally changes.
506
00:19:44,800 --> 00:19:46,800
It stops being just an architecture discussion
507
00:19:46,800 --> 00:19:48,400
and becomes a leadership decision.
508
00:19:48,400 --> 00:19:50,400
It forces a choice about who owns the risk
509
00:19:50,400 --> 00:19:53,600
when a high value action still depends on nothing but an email signal.
510
00:19:53,600 --> 00:19:55,400
What leaders need to change now.
511
00:19:55,400 --> 00:19:58,800
Business email compromise should no longer sit in the box labeled email security.
512
00:19:58,800 --> 00:20:03,000
That framing is far too small and it keeps your spending pointed at detection
513
00:20:03,000 --> 00:20:04,800
while your approval logic stays weak.
514
00:20:04,800 --> 00:20:07,400
Leadership needs to ask three very direct questions.
515
00:20:07,400 --> 00:20:10,600
First, what business decisions still trust email by default?
516
00:20:10,600 --> 00:20:15,200
Second, which specific people can actually prove their authority inside those workflows?
517
00:20:15,200 --> 00:20:18,200
And finally, where can money or records still move
518
00:20:18,200 --> 00:20:20,000
without cryptographic verification?
519
00:20:20,000 --> 00:20:23,400
Answering those questions shifts both your budget and your accountability.
520
00:20:23,400 --> 00:20:26,800
Verified ID is not a side experiment for your innovation teams.
521
00:20:26,800 --> 00:20:31,800
It belongs in your Zero Trust execution, tied directly to identity and risk ownership.
522
00:20:31,800 --> 00:20:34,200
You should set one standard that everyone can measure.
523
00:20:34,200 --> 00:20:37,200
No high-risk approval happens without proof of authority.
524
00:20:37,200 --> 00:20:40,400
That one policy is what finally forces the model to change.
525
00:20:40,400 --> 00:20:43,800
Deepfake attacks work because we still trust signals that anyone can fake.
526
00:20:43,800 --> 00:20:45,800
The model is broken.
527
00:20:45,800 --> 00:20:50,400
If this changed how you think, follow me, Mirko Peters on LinkedIn and leave a review.
528
00:20:50,400 --> 00:20:52,400
Tell me which topic I should break down next.