Enterprise Migration Strategy: Moving Legacy Systems to Azure Without Breaking the Business
Most cloud migrations don’t fail because of technical choices. They fail because leadership frames migration as an IT project instead of an operating model change. Moving servers is easy. Moving decision-making, accountability, and enforcement is not.
In this episode, we unpack why cloud amplifies organizational behavior rather than fixing it. Azure doesn’t break systems — it exposes identity drift, policy gaps, unmanaged exceptions, and delivery teams improvising at scale. That’s why so many migrations “go fine” technically and still disrupt the business on Monday morning.
The core mistake is sequencing. Organizations migrate workloads before they establish a platform that can enforce intent: identity, policy, networking, logging, and subscription boundaries. Every exception approved during migration becomes permanent debt, and governance throughput quickly collapses.
The path forward is simple but uncomfortable: platform first, then a repeatable migration factory, then modernization that compounds instead of fragments. Cloud success isn’t about where workloads run. It’s about whether the enterprise can change safely, predictably, and under control.
Most organizations believe cloud migrations fail because of technical choices: the wrong Azure service, the wrong SKU, the wrong network design.
They’re wrong.
Migrations fail because leadership frames them as IT projects instead of operating model changes. “Move the servers.” “Hit the date.” “Don’t disrupt the business.” That framing guarantees disruption — not because compute moves, but because entropy does.
In this episode, we unpack the uncomfortable truth: cloud doesn’t break organizations — it exposes how they already operate. Identity drift, policy gaps, unmanaged exceptions, and delivery teams improvising at scale don’t appear because of Azure. Azure just removes the friction that used to hide them.
The promise of this episode is simple and practical: platform first, then sequencing, then modernization that compounds instead of collapses.
Core Thesis
If you migrate chaos to the cloud, you don’t get agility.
You get expensive chaos.
Nothing breaks technically in failed migrations.
Everything breaks systemically.
Why “Migration as an IT Project” Always Fails
The foundational mistake is treating legacy as old hardware.
Legacy isn’t servers in a basement.
It’s socio-technical debt:
- Undocumented dependencies
- Approval flows wired into people
- Audit evidence stored in tribal memory
- Workarounds that only work because three specific humans know the sequence
When leaders say “we’re moving to Azure,” they often mean changing infrastructure.
What they’re actually doing is changing how decisions are enforced — or pretending they can avoid that change.
They can’t.
Cloud accelerates whatever operating model already exists. A faster conveyor belt doesn’t fix a messy factory floor. It spreads the mess faster.
The Anti-Pattern Cluster (and Why It’s Predictable)
Across industries, the same signals appear:
- Leadership mandates speed: "We'll tighten controls later"
- Delivery hears: "Governance is optional"
- Security hears: "Accept risk until audit"
- Finance hears: "We'll fix cost after the data center exit"
- Platform teams get timelines, not authority
What gets measured:
- Apps migrated
- Servers decommissioned
- Percent complete
Those are activity metrics.
They say nothing about whether the business can operate.
The outcomes that actually matter:
- Time from idea to production
- Stability under change
- Predictable cost-to-serve
- Safe parallel onboarding of teams
Cloud migrations are justified by outcomes — not architectures.
The Real Risk: Exceptions Are Entropy Generators
In IT projects, exceptions feel temporary.
In cloud platforms, exceptions are permanent until removed, and removal is political once the business depends on them.
“We’ll centralize later” isn’t a lie.
It’s a misunderstanding of physics.
Once teams have a working path, they keep using it. Once the business depends on that path, you can’t remove it without a fight. Azure doesn’t create this behavior — it exposes it.
Lift-and-shift becomes the destination not by strategy, but by inertia.
Failure Story: The Cutover That “Went Fine”
Two real-world patterns repeat across regulated industries:
Financial Services
- Apps migrated successfully
- Availability was fine
- Monday morning access exploded
- Audit trails fragmented
- "Temporary" access became permanent
- Compliance evidence moved into spreadsheets
Healthcare
- Weekend cutover succeeded
- Portal was live Monday morning
- Identity split between on-prem and Entra
- Helpdesk flooded
- Emergency access became normal access
- Logs scattered across subscriptions
In both cases:
Nothing broke technically. Everything broke systemically.
The invisible constraint was governance throughput — the rate at which access, policy, logging, and evidence can be safely enforced.
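The constraint can be made concrete with a toy model (illustrative only, not any Azure tooling): workloads land faster than governance can onboard them, and the gap compounds into debt.

```python
# Toy model of governance throughput. Workloads migrate at a fixed weekly
# rate, while governance work (access reviews, policy updates, logging
# onboarding) can only absorb a smaller rate. The gap compounds into
# "governance debt": workloads running without enforced controls.

def governance_debt(migration_rate: int, governance_rate: int, weeks: int) -> list[int]:
    """Cumulative count of migrated-but-ungoverned workloads, week by week."""
    debt, history = 0, []
    for _ in range(weeks):
        debt += migration_rate              # workloads land in Azure
        debt -= min(debt, governance_rate)  # governance catches up as fast as it can
        history.append(debt)
    return history

# Ten workloads migrate per week, but governance can only onboard six.
print(governance_debt(10, 6, 5))  # -> [4, 8, 12, 16, 20]
```

The numbers are made up; the shape is the point. Debt grows linearly forever unless migration slows down or governance throughput rises.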
The Preventing Principle Everyone Skips
It’s boring. That’s why it works.
Establish the platform landing zone before migrating anything that matters.
Because:
- The first workload sets precedent
- Precedent becomes pattern
- Pattern becomes platform
If your first migration task is moving workloads, you’ve already failed — not technically, but systemically.
Reframing Azure Correctly
Azure is not a destination.
It’s a control plane — a distributed decision engine that enforces identity, policy, network, and operational intent at scale if you define that intent in executable form.
Azure without governance isn’t flexibility.
It’s outsourced entropy.
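What "intent in executable form" means can be sketched with the if/then rule shape that Azure Policy definitions actually use. The evaluator below is a deliberately tiny stand-in, real evaluation happens inside Azure Resource Manager, and only the `notIn` condition is modeled here:

```python
# A single Azure Policy-style rule ({"if": ..., "then": {"effect": ...}})
# evaluated against resource requests. This toy evaluator supports only the
# "notIn" condition; the point is the principle: governance as a rule the
# platform executes, not a document people remember.

allowed_locations_policy = {
    "if": {"field": "location", "notIn": ["westeurope", "northeurope"]},
    "then": {"effect": "deny"},
}

def evaluate(policy: dict, resource: dict) -> str:
    cond = policy["if"]
    # "notIn" matches when the field's value is NOT in the listed set;
    # a match means the resource violates intent and the effect applies.
    condition_met = resource.get(cond["field"]) not in cond["notIn"]
    return policy["then"]["effect"] if condition_met else "allow"

print(evaluate(allowed_locations_policy, {"location": "eastus"}))      # -> deny
print(evaluate(allowed_locations_policy, {"location": "westeurope"}))  # -> allow
```

A deploy-time denial like this is exactly the kind of rule that keeps working after the people who wrote it have moved on.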
Landing zones exist because enterprises need rules that survive:
- Reorgs
- Leadership changes
- Delivery pressure
- Audit scrutiny
You’re not building an Azure environment.
You’re building an enterprise that happens to run on Azure.
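One reason rules survive reorgs is scope inheritance: policy assignments attach to management groups, and every child subscription inherits them. A minimal sketch of that mechanic, using hypothetical scope and policy names:

```python
# Toy model of management-group inheritance (hypothetical names, not an
# Azure SDK). A subscription's effective policy set is the union of every
# assignment from its own scope up to the root management group.

parents = {                      # child scope -> parent scope
    "sub-payments": "mg-prod",
    "sub-sandbox": "mg-nonprod",
    "mg-prod": "mg-root",
    "mg-nonprod": "mg-root",
}
assignments = {                  # scope -> policies assigned at that scope
    "mg-root": {"require-tags", "allowed-locations"},
    "mg-prod": {"deny-public-ip", "central-logging"},
}

def effective_policies(scope: str) -> set[str]:
    """Walk up the hierarchy, accumulating every inherited assignment."""
    acc = set()
    while scope is not None:
        acc |= assignments.get(scope, set())
        scope = parents.get(scope)
    return acc

print(sorted(effective_policies("sub-payments")))
# -> ['allowed-locations', 'central-logging', 'deny-public-ip', 'require-tags']
```

Move a subscription under a different management group and its effective controls change with it; rename a business unit and nothing about enforcement moves at all.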
Executive Takeaways
- Migration is an operating model change, not an IT task
- Exceptions compound faster than teams can repay them
- Governance throughput is a real constraint
- Lift-and-shift fails when it becomes a destination
- Platform first isn't bureaucracy — it's entropy control
If leadership wants speed, the answer isn’t fewer guardrails.
It’s guardrails that don’t require negotiation.
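In practice, "guardrails that don't require negotiation" means a vending pathway where baselines are defaults, not requests. A sketch, assuming a hypothetical `vend_subscription` helper (this is not the Azure API):

```python
# Paved-road subscription vending sketch. Every new subscription arrives
# with management-group placement, baseline policies, and a central log
# destination already attached -- teams never have to ask for guardrails,
# and never get the chance to opt out. All names are illustrative.

BASELINE = {
    "management_group": "mg-landing-zones",
    "policies": ["allowed-locations", "require-tags", "deny-public-ip"],
    "log_destination": "law-central",   # central Log Analytics workspace
}

def vend_subscription(team: str, environment: str) -> dict:
    """Return a subscription request with non-negotiable defaults applied."""
    return {
        "name": f"sub-{team}-{environment}",
        **BASELINE,                      # guardrails are defaults, not requests
        "owner_team": team,
    }

sub = vend_subscription("payments", "prod")
print(sub["name"], sub["management_group"])  # sub-payments-prod mg-landing-zones
```

The design choice worth noting: the team supplies only its name and environment; everything governance cares about is stamped on by the pathway itself.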
Closing Thought
Cloud migrations don’t fail in Terraform.
They fail in framing.
Nothing broke technically.
Everything broke systemically.
And that’s why the next step isn’t “what moves first.”
It’s defining Azure correctly — as the control plane that makes intent enforceable when nobody’s watching.
1
00:00:00,000 --> 00:00:03,960
Most organizations think cloud migrations fail because they pick the wrong Azure services,
2
00:00:03,960 --> 00:00:05,600
the wrong network model, the wrong SKU.
3
00:00:05,600 --> 00:00:06,400
They are wrong.
4
00:00:06,400 --> 00:00:09,800
Migrations fail because leadership frames them as an IT project.
5
00:00:09,800 --> 00:00:12,600
Move the servers, hit the date, don't disrupt the business.
6
00:00:12,600 --> 00:00:16,300
That framing guarantees disruption because the business isn't disrupted by compute.
7
00:00:16,300 --> 00:00:22,000
It's disrupted by entropy, identity drift, policy gaps, uncontrolled exceptions,
8
00:00:22,000 --> 00:00:24,200
and delivery teams improvising at scale.
9
00:00:24,200 --> 00:00:25,300
Today, this gets simple.
10
00:00:25,300 --> 00:00:28,600
Platform first, then sequencing, then modernization that compounds.
11
00:00:28,600 --> 00:00:29,800
Remember this line.
12
00:00:29,800 --> 00:00:32,900
Nothing broke technically, everything broke systemically.
13
00:00:32,900 --> 00:00:36,800
The foundational misunderstanding: migration as an IT project.
14
00:00:36,800 --> 00:00:40,200
The foundational mistake is treating legacy as old hardware.
15
00:00:40,200 --> 00:00:42,500
Legacy is not servers in a basement.
16
00:00:42,500 --> 00:00:47,200
Legacy is sociotechnical debt, brittle software, undocumented dependencies,
17
00:00:47,200 --> 00:00:51,300
approvals wired into people, audit evidence stored in tribal memory,
18
00:00:51,300 --> 00:00:55,100
and business processes that only work because three specific humans
19
00:00:55,100 --> 00:00:57,800
know which workaround to run on Tuesday nights.
20
00:00:57,800 --> 00:00:59,000
That distinction matters.
21
00:01:00,000 --> 00:01:05,600
When leaders say we're moving to Azure, they often mean we're changing where the infrastructure lives,
22
00:01:05,600 --> 00:01:09,000
but what the organization is actually doing is changing its operating model.
23
00:01:09,000 --> 00:01:11,000
Or pretending it can avoid that change.
24
00:01:11,000 --> 00:01:11,800
It cannot.
25
00:01:11,800 --> 00:01:14,800
Azure doesn't magically make a broken operating model behave.
26
00:01:14,800 --> 00:01:15,800
It amplifies it.
27
00:01:15,800 --> 00:01:19,000
In the same way a faster conveyor belt doesn't fix a messy factory floor.
28
00:01:19,000 --> 00:01:20,300
It spreads the mess faster.
29
00:01:20,300 --> 00:01:24,200
If you migrate chaos to the cloud, you don't get agility, you get expensive chaos.
30
00:01:24,200 --> 00:01:27,200
And the anti-pattern cluster is painfully consistent.
31
00:01:27,200 --> 00:01:28,700
Leadership mandates speed.
32
00:01:28,700 --> 00:01:29,400
Move fast.
33
00:01:29,400 --> 00:01:31,100
We'll tighten controls later.
34
00:01:31,100 --> 00:01:32,500
Delivery teams hear:
35
00:01:32,500 --> 00:01:34,500
Ship now, governance is optional.
36
00:01:34,500 --> 00:01:37,800
Security hears: accept risk until the audit shows up.
37
00:01:37,800 --> 00:01:42,000
Finance hears: we'll figure out cost management once we've exited the data center.
38
00:01:42,000 --> 00:01:45,800
And the platform team, if it even exists, gets handed a timeline, not authority.
39
00:01:45,800 --> 00:01:47,100
So what gets measured?
40
00:01:47,100 --> 00:01:51,300
Apps migrated, servers decommissioned, terabytes moved, percent complete.
41
00:01:51,300 --> 00:01:52,600
Those are activity metrics.
42
00:01:52,600 --> 00:01:53,800
They're comforting.
43
00:01:53,800 --> 00:01:56,500
They're also irrelevant to whether the business can operate
44
00:01:56,500 --> 00:01:58,500
because the outcomes that matter are different.
45
00:01:58,500 --> 00:02:00,100
How long from idea to production?
46
00:02:00,100 --> 00:02:01,900
How stable is production when change happens?
47
00:02:01,900 --> 00:02:04,300
How predictable is cost to serve per workload?
48
00:02:04,300 --> 00:02:07,300
How many teams can onboard safely without inventing their own cloud?
49
00:02:07,300 --> 00:02:10,300
Cloud migrations are justified by outcomes, not architectures.
50
00:02:10,300 --> 00:02:14,100
Now here's why the IT project framing keeps producing executive surprise.
51
00:02:14,100 --> 00:02:18,100
In an IT project you assume the environment is stable, the requirements are knowable,
52
00:02:18,100 --> 00:02:20,400
and the main risk is technical execution.
53
00:02:20,400 --> 00:02:22,900
In an enterprise migration, the environment is not stable.
54
00:02:22,900 --> 00:02:25,500
The business keeps changing, the org keeps reorganizing.
55
00:02:25,500 --> 00:02:28,400
Compliance expectations evolve, the threat model evolves.
56
00:02:28,400 --> 00:02:30,500
Vendor contracts shift.
57
00:02:30,500 --> 00:02:36,500
And most importantly, every exception you approve today becomes a permanent pathway tomorrow.
58
00:02:36,500 --> 00:02:40,000
Exceptions are not one-time decisions, they are entropy generators.
59
00:02:40,000 --> 00:02:43,300
This is why "we'll centralize later" is a lie you tell yourself.
60
00:02:43,300 --> 00:02:47,100
Not because people are dishonest, because once delivery teams have a working path,
61
00:02:47,100 --> 00:02:48,500
they will keep using it.
62
00:02:48,500 --> 00:02:52,700
And once the business is dependent on that path, you can't remove it without a fight.
63
00:02:52,700 --> 00:02:55,700
The cloud doesn't create this behavior, it exposes it.
64
00:02:55,700 --> 00:03:00,000
So when leadership says just lift and shift first, what they're often doing is buying time.
65
00:03:00,000 --> 00:03:02,900
And time is fine if you spend it building the control plane.
66
00:03:02,900 --> 00:03:06,900
But most organizations spend that time doing more lifts, more shifts and more exceptions.
67
00:03:06,900 --> 00:03:10,100
Therefore the lift and shift becomes the destination.
68
00:03:10,100 --> 00:03:15,000
And then everyone acts confused when cost goes up, risk goes up and delivery gets slower.
69
00:03:15,000 --> 00:03:18,000
Let's anchor this with a failure story, because slides won't save you.
70
00:03:18,000 --> 00:03:24,000
A financial services org, regulated, audit-heavy, and absolutely dependent on clean separation of duties,
71
00:03:24,000 --> 00:03:26,000
decided to modernize quickly.
72
00:03:26,000 --> 00:03:30,200
Leadership intent was simple, migrate a set of internal finance apps over a quarter,
73
00:03:30,200 --> 00:03:34,000
keep the same access model, and clean up governance after the move.
74
00:03:34,000 --> 00:03:36,000
What actually happened was predictable.
75
00:03:36,000 --> 00:03:39,600
The apps moved, the cutover succeeded, availability was fine.
76
00:03:39,600 --> 00:03:46,600
Then Monday came. Access requests exploded, because the older approval pathways didn't map cleanly to Azure roles and Entra groups.
77
00:03:46,600 --> 00:03:50,200
Audit trails became fragmented because logging wasn't centralized.
78
00:03:50,200 --> 00:03:55,400
And retention assumptions were different. Teams started creating temporary workarounds, ad hoc role assignments,
79
00:03:55,400 --> 00:03:59,800
shared accounts, manual exports of logs and spreadsheet-based evidence for compliance.
80
00:03:59,800 --> 00:04:03,500
Nothing broke technically, everything broke systemically.
81
00:04:03,500 --> 00:04:06,500
The invisible constraint they ignored was governance throughput.
82
00:04:06,500 --> 00:04:15,000
In regulated environments, the rate at which you can safely change access, policy, logging, and evidence is slower than the rate at which delivery teams can ship infrastructure.
83
00:04:15,000 --> 00:04:18,000
So if you migrate workloads faster than you can enforce intent,
84
00:04:18,000 --> 00:04:22,300
you accumulate governance debt faster than you can repay it. That debt doesn't sit quietly.
85
00:04:22,300 --> 00:04:27,800
It shows up as blocked work, audit panic, and incident response that can't answer basic questions.
86
00:04:27,800 --> 00:04:31,600
The principle that would have prevented it is boring and that's why people skip it.
87
00:04:31,600 --> 00:04:36,800
Establish the landing zone: identity, policy, network, logging, and subscription boundaries,
88
00:04:36,800 --> 00:04:38,800
before you migrate anything that matters.
89
00:04:38,800 --> 00:04:42,500
Because the first workload sets the precedent, the precedent becomes the pattern.
90
00:04:42,500 --> 00:04:45,400
The pattern becomes the platform, whether you designed it or not.
91
00:04:45,400 --> 00:04:50,400
And this is the uncomfortable truth: if your first migration task is moving workloads, you've already failed.
92
00:04:50,400 --> 00:04:54,000
So before we talk about what moves first, we have to define Azure correctly,
93
00:04:54,000 --> 00:04:58,000
not as a place, but as the control plane that makes intent enforceable at scale.
94
00:04:58,000 --> 00:05:03,200
Failure story: the cutover that "went fine." They love telling this story because it sounds like competence.
95
00:05:03,200 --> 00:05:10,200
A healthcare and life sciences organization planned a weekend cutover for a legacy but stable patient portal and a couple of backend services.
96
00:05:10,200 --> 00:05:18,300
Leadership intent was: no downtime, weekend-only disruption, keep the same controls for now, and we'll modernize the governance after we're safely in Azure.
97
00:05:18,300 --> 00:05:22,000
In other words, move the workload first, then fix the operating model later.
98
00:05:22,000 --> 00:05:24,600
What actually happened is the part nobody puts in the status deck.
99
00:05:24,600 --> 00:05:31,400
The cutover itself went fine, data synced, DNS flipped, endpoints responded, the portal came up on Monday morning and technically it worked.
100
00:05:31,400 --> 00:05:33,000
Then the business tried to operate.
101
00:05:33,000 --> 00:05:39,600
Clinicians couldn't access the right records because the identity model was now half on-prem groups, half Entra groups, with inconsistent mappings.
102
00:05:39,600 --> 00:05:46,000
The help desk got flooded with "it worked on Friday" tickets, and the only answer was to add more people to more groups, faster.
103
00:05:46,000 --> 00:05:54,200
Researchers lost access to shared data sets because storage permissions had been recreated manually and the inheritance model wasn't what anyone assumed.
104
00:05:54,200 --> 00:06:05,200
Audit asked for evidence of who approved what access changed during the cutover window and the logs were split across subscriptions with different retention, different workspace locations and different owners.
105
00:06:05,200 --> 00:06:09,200
By noon you had the usual symptoms: emergency access grants,
106
00:06:09,200 --> 00:06:16,600
ad hoc role assignments, shared accounts "just for today," and a spreadsheet titled something like Access Fixes Final V7.
107
00:06:16,600 --> 00:06:19,600
Nothing broke technically, everything broke systemically.
108
00:06:19,600 --> 00:06:21,600
Here's the invisible constraint they ignored.
109
00:06:21,600 --> 00:06:24,600
Governance debt accumulates faster than delivery can patch it.
110
00:06:24,600 --> 00:06:27,200
When you move a workload you don't just move compute.
111
00:06:27,200 --> 00:06:31,200
You move every surrounding dependency that used to be free because it was implicit.
112
00:06:31,200 --> 00:06:36,800
Identity flows, approval flows, logging flows, evidence flows. In regulated environments, those flows are the business.
113
00:06:36,800 --> 00:06:42,800
The application is just the user interface on top. And the minute you cut over, the business doesn't ask, is the VM running?
114
00:06:42,800 --> 00:06:47,600
The business asks can my people do their jobs and can I prove they did them under control?
115
00:06:47,600 --> 00:06:53,600
If you don't pre-build the guardrails, teams improvise them, and improvised guardrails don't converge into a clean design.
116
00:06:53,600 --> 00:06:58,200
They converge into a pile of exceptions that nobody wants to touch again. That's the physics of it.
117
00:06:58,200 --> 00:07:01,400
The preventing principle is not complicated but it is unpopular.
118
00:07:01,400 --> 00:07:08,400
Treat migration as an operating model change and build the guardrails before the first workload earns the right to move.
119
00:07:08,400 --> 00:07:20,400
That means landing zone policies exist, management group inheritance exists, subscription boundaries exist, logging is centralized, identity is consistent, and the approval model is real, not "planned real."
120
00:07:20,400 --> 00:07:24,400
Because once the business resumes operations you will not get the quiet time back.
121
00:07:24,400 --> 00:07:31,200
Every day after cutover becomes a negotiation between delivery speed and control and control loses when it isn't enforced by design.
122
00:07:31,200 --> 00:07:34,600
So the cutover went fine, the business week didn't.
123
00:07:34,600 --> 00:07:37,400
And that's why the next step is defining Azure correctly.
124
00:07:37,400 --> 00:07:41,400
Before you define a migration plan, Azure isn't a destination you arrive at.
125
00:07:41,400 --> 00:07:44,400
It's a control plane you either establish or you don't.
126
00:07:44,400 --> 00:07:47,200
Azure is not the destination, it's the control plane.
127
00:07:47,200 --> 00:07:50,000
Most organizations talk about Azure like it's a location.
128
00:07:50,000 --> 00:07:52,000
"Move it to Azure." "As soon as we're in Azure."
129
00:07:52,000 --> 00:07:54,000
Once we get to Azure we'll modernize.
130
00:07:54,000 --> 00:07:58,600
That language is the first tell that the program is about to drift into conditional chaos.
131
00:07:58,600 --> 00:08:02,000
Because Azure is not a destination, it's not someone else's data center.
132
00:08:02,000 --> 00:08:06,400
It's not a magic elasticity engine that turns legacy into modern by proximity.
133
00:08:06,400 --> 00:08:15,000
In architectural terms, Azure is a control plane: a distributed decision engine that can enforce your intent across identity, network, compute, data, and operations at scale.
134
00:08:15,000 --> 00:08:19,200
If you actually define that intent in a way the platform can compile into policy.
135
00:08:19,200 --> 00:08:27,000
That distinction matters. On-prem, control is often social: a handful of infrastructure people know what good looks like, and everything routes through them.
136
00:08:27,000 --> 00:08:30,000
That model doesn't scale but it feels safe because it's familiar.
137
00:08:30,000 --> 00:08:35,600
In Azure the system will let you create almost anything, almost anywhere with almost any shape of access unless you prevent it.
138
00:08:35,600 --> 00:08:38,200
Azure is not a strict gatekeeper, it's an accelerant.
139
00:08:38,200 --> 00:08:44,400
So if your intent isn't encoded into governance policy, identity boundaries, subscription design, and logging requirements,
140
00:08:44,400 --> 00:08:46,600
then your organization isn't using Azure.
141
00:08:46,600 --> 00:08:50,400
It's renting entropy. Azure without governance is just outsourced entropy.
142
00:08:50,400 --> 00:08:52,400
And this is why landing zones exist at all.
143
00:08:52,400 --> 00:08:55,400
Not because Microsoft wanted a prettier reference diagram.
144
00:08:55,400 --> 00:09:00,400
Landing zones exist because enterprises need a way to make rules durable when the organization is not.
145
00:09:00,400 --> 00:09:02,200
You're not trying to build an Azure environment.
146
00:09:02,200 --> 00:09:05,400
You're trying to build an enterprise environment that happens to run on Azure.
147
00:09:05,400 --> 00:09:09,600
The real product you want out of Azure is not a VM or AKS or app service.
148
00:09:09,600 --> 00:09:16,000
The real product is standardization: standard identity flows, standard network paths, standard policy enforcement,
149
00:09:16,000 --> 00:09:22,800
standard logging and evidence, standard subscription boundaries. Because that's what gives you the only thing the business actually cares about: predictable change.
150
00:09:22,800 --> 00:09:28,600
And this is the executive framing that stops people from doing migration theater. Executives don't need a tour of services.
151
00:09:28,600 --> 00:09:33,400
They need a set of outcomes that the platform enables and a way to measure drift.
152
00:09:33,400 --> 00:09:39,600
Time to market: how long from idea to production without begging the platform team for exceptions. Risk and stability:
153
00:09:39,600 --> 00:09:46,200
whether the security posture stays consistent as more teams onboard, and whether incident response can answer questions without archaeology.
154
00:09:46,200 --> 00:09:52,400
Cost-to-serve predictability: not "did we save money," but can we forecast the cost of running this workload without surprises.
155
00:09:52,400 --> 00:10:01,200
Organizational throughput: how many teams can move in parallel without each one inventing their own cloud. Cloud migrations are justified by outcomes, not architectures.
156
00:10:01,200 --> 00:10:14,000
Now here's the uncomfortable truth: most organizations treat governance like a compliance artifact. A spreadsheet, a deck, a set of best practices that people will follow because it's reasonable. Governance that relies on memory is not governance.
157
00:10:14,000 --> 00:10:32,600
It's a suggestion. In Azure, your governance has to be executable. That means policy-driven. It means identity-driven. It means designed into the path teams use, so the right way is the easy way. Because if you make the secure path slower than the insecure path, you've created a market, and the business will buy speed. So Azure, correctly defined, becomes the platform where you enforce your assumptions at scale.
158
00:10:32,600 --> 00:10:40,400
It's where you decide where workloads may run, how they authenticate, how they route traffic, how they log and retain evidence, and how blast radius is contained.
159
00:10:40,400 --> 00:11:03,400
And once those decisions exist as code and policy, migration stops being improvisation. It becomes onboarding. That's the pivot: migration is not "move stuff." Migration is "create a governed pathway, then move things through it." So when someone says Azure is our destination, the correct response is: no, Azure is the control plane that makes the destination possible. The destination is an organization that can change predictably, under control,
160
00:11:03,400 --> 00:11:12,400
without heroic exceptions. And that's why the next thing we have to talk about is landing zones, not as diagrams, but as an organizational contract you either enforce or you don't.
161
00:11:12,400 --> 00:11:21,400
Failure story: cloud adoption without a platform. A financial services firm decided to unlock innovation by letting teams self-serve Azure subscriptions.
162
00:11:21,400 --> 00:11:32,400
Leadership intent sounded modern: move fast, reduce central bottlenecks, and let product teams build. What actually happened was not innovation; it was variance. Every team created its own subscription.
163
00:11:32,400 --> 00:11:47,400
Every subscription had its own naming, its own network decisions, its own logging choices, its own half-implemented security controls. Some teams deployed straight to public endpoints because it was faster. Others built private endpoints but didn't wire DNS correctly. A few teams created their own temporary
164
00:11:47,400 --> 00:12:14,400
Entra app registrations and service principals, because the official process took too long. Nothing broke technically; everything broke systemically. The invisible constraint was that decentralization without a shared control plane doesn't create autonomy. It creates unbounded divergence. And divergence is expensive, because it multiplies everything you have to govern: every unique pattern requires a unique exception, a unique audit story, a unique incident response runbook, and a unique person who knows how that one works.
165
00:12:14,400 --> 00:12:43,400
Then the audit showed up. The auditor didn't ask "do you have Azure." It asked: show me consistent controls. Where is your centralized logging? Where are your mandatory policies? How do you prove location restrictions, encryption requirements, and separation of duties? How do you ensure privileged access is controlled the same way across workloads? And the organization had no clean answer, because there wasn't one Azure environment. There were hundreds of micro-environments, each with its own interpretation of reality. So the response was predictable: panic centralization, a task force,
166
00:12:43,400 --> 00:12:56,400
a moratorium on new subscriptions, emergency policy assignments that broke workloads because teams had built against implicit freedoms. Suddenly the innovation program became an exception-clearing operation. This is the part executives don't enjoy hearing.
167
00:12:56,400 --> 00:13:12,400
Self-service without guardrails does not scale. It never has. The preventing principle is simple: you build the platform first, then you give teams a paved road to consume it. That means a landing zone exists before the first subscription is vended. Subscription creation is not an ad hoc portal activity. It is an enforced
168
00:13:12,400 --> 00:13:41,400
pathway with defaults: management group placement, baseline policies, logging, networking integration, and identity patterns already attached. Because if you let teams create the foundational shape of the cloud, you're not democratizing innovation. You're outsourcing architecture to whoever needed something on a Tuesday afternoon. And once that happens, the platform team's job becomes archaeology, not enablement. So if your leadership wants teams to innovate faster in Azure, the real translation is: teams can innovate faster when they don't have to reinvent identity, networking,
169
00:13:41,400 --> 00:13:54,400
security, logging, and compliance every time they deploy. That only happens when the platform exists first. Which brings us to the non-negotiable piece everyone tries to treat as a diagram: the landing zone contract.
170
00:13:54,400 --> 00:14:10,400
Landing zones are an organizational contract, not a diagram. Landing zones are routinely misunderstood because people look at the picture and think the picture is the thing. It isn't. A landing zone is an organizational contract: a set of enforceable boundaries that define how the enterprise will operate in Azure. Not best practices. Not recommended architecture.
171
00:14:10,400 --> 00:14:29,400
A contract. Because the business doesn't need an architecture diagram. The business needs predictable outcomes, and predictable outcomes require predictable constraints. A landing zone contract has a few components that have to be explicit, owned, and enforced. Identity and access: who can do what, where, and under what conditions.
172
00:14:29,400 --> 00:14:58,400
This is Entra, RBAC, privileged access, conditional access, and the lifecycle of identities. In other words: the right people, using the right accounts, through the right flows, every time. Network topology and connectivity: where traffic flows, how it's inspected, how DNS works, how private access is achieved, and how hybrid connectivity is handled. This is the difference between "we can reach it" and "we can defend it." Security posture: the baseline expectations that don't vary by team mood. Logging, Defender configurations, Sentinel integration if you run it,
173
00:14:58,400 --> 00:15:23,400
mandatory encryption, key management decisions. The point isn't to be perfect. The point is to be consistent. Compliance and policy: the rules that are executable. Region restrictions, tagging, resource constraints, audit requirements. Not policies as documentation; policies as enforcement. Subscription management: how you create blast-radius boundaries, how you separate environments, how you allocate budgets, and how you keep teams from turning one subscription into a shared junk drawer.
174
00:15:23,400 --> 00:15:52,400
And notice what's absent from that list: workloads. Because landing zones are not about the apps. Landing zones are about the environment that makes apps survivable at scale. This is why the statement matters: if your first migration task is moving workloads, you've already failed. Because if the platform isn't real, you have no contract. And if you have no contract, every workload becomes a negotiation. Teams will negotiate for public endpoints. They will negotiate for temporary exclusions. They will negotiate for one-off identity patterns. They will negotiate for relaxed policies "just for this release."
175
00:15:52,400 --> 00:16:21,400
And you will approve them, not because you're reckless, but because you're trying to ship. But every approved negotiation becomes a permanent pathway, and those pathways accumulate. So let's separate two concepts that get conflated: the platform landing zone versus application landing zones. The platform landing zone is the shared enterprise substrate: management groups, policy assignments, shared services subscriptions, centralized logging, connectivity patterns, and identity boundaries. This is owned by a platform team, not as a committee, but as a product. Application landing zones are where workloads live.
176
00:16:21,400 --> 00:16:37,400
They inherit the contract. They don't redefine it. This boundary matters because it separates how the enterprise stays safe and governable from how teams deliver features. When you blur it, you either slow delivery to a crawl with centralized approvals, or you lose control through uncontrolled divergence.
177
00:16:37,400 --> 00:16:49,400
in regulated industries like financial services and health care landing zones are not optional because the audit doesn't care how fast you migrated the audit cares whether the control narrative is coherent without landing zones you can't tell a coherent story
178
00:16:49,400 --> 00:17:01,400
you have inconsistent evidence inconsistent logging inconsistent access models inconsistent network controls you don't just have gaps you have ambiguity and ambiguity is where investigations go to die skipping landing zones also removes your rollback story
179
00:17:01,400 --> 00:17:17,400
people love to talk about rollback like it's a technical plan restore backup flip DNS back failover but operational rollback is governance rollback if you don't know what policies were in place who had access what logs exist and what changed during the migration window you can't roll back to known good
180
00:17:17,400 --> 00:17:36,400
you can only roll back to we hope this is close enough that's not a plan that's wishful thinking so when someone asks do we really need landing zones the right answer is you need a contract before you need a migration because migration is an onboarding process into that contract and if you don't define the contract upfront your organization will still create one it'll just be accidental
181
00:17:36,400 --> 00:17:54,400
accidental contracts always favor speed over control until the bill arrives the hierarchy management groups and environment first governance once you accept that a landing zone is a contract you need a place to hang that contract where it won't get rewritten every time someone reorganizes the company that place is the management group hierarchy
182
00:17:54,400 --> 00:18:23,400
and this is where most organizations commit the quietest most expensive mistake they mirror the org chart they build management groups for retail banking commercial wealth claims research manufacturing ops whatever the current shape of the business is it feels intuitive it is also fragility encoded as governance because org charts change faster than policy can follow mergers happen leaders reshuffle teams split combine rename and get realigned quarterly your policy set doesn't your audit narrative doesn't your security controls shouldn't so the firm position is environment
183
00:18:23,400 --> 00:18:33,400
first hierarchy prod non prod sandbox and then underneath that carve by exposure and dependency not by politics
184
00:18:33,400 --> 00:18:41,400
core versus online is useful because network and threat models differ regulated versus non regulated is useful because control requirements differ
185
00:18:41,400 --> 00:18:59,400
confidential versus standard is useful because incident response and key management differ but the org chart is not a stable boundary if your cloud hierarchy mirrors your org chart you've built fragility into governance management groups are not a reporting structure they are policy inheritance surfaces that distinction matters
186
00:18:59,400 --> 00:19:23,400
a management group is basically a scope where you attach rules that should apply to everything beneath it Azure Policy assignments RBAC role assignments compliance initiatives guardrails that should be consistent without negotiation subscriptions on the other hand are blast radius boundaries they are where you contain failure cost blowouts quota consumption accidental deletion noisy neighbor resource contention and operational mistakes
187
00:19:23,400 --> 00:19:42,400
subscription is not just a billing container it's the boundary that lets you say this team can break their stuff without breaking everyone else's so the hierarchy does two jobs at once management groups propagate intent subscriptions contain consequences now translate that into an environment first model at the top you have your root and your platform governance then you create
188
00:19:42,400 --> 00:20:06,400
environment management groups prod non-prod sandbox under those you create workload categories like core and online or whatever your threat model requires why does this work because it maps clearly to audit narratives auditors don't care which VP owns the system this month they care that production has tighter controls than non-production that separation of duties exists that changes are controlled and that logging and evidence are consistent
189
00:20:06,400 --> 00:20:35,400
environment first hierarchy gives you that story without improvisation prod policies stricter location enforcement stricter network rules deny public endpoints where possible mandatory diagnostic settings longer log retention stricter RBAC tighter privileged access controls non-prod policies similar shape but with intentionally relaxed constraints where it's rational still governed but less punitive sandbox guardrails that prevent stupidity from becoming a breach report but enough freedom for experimentation without filing tickets for every idea
190
00:20:35,400 --> 00:20:47,400
and because those rules are inherited through management groups you don't rebuild governance per subscription you don't negotiate it per project you attach it once and then you vend subscriptions into the correct place that's the point
191
00:20:47,400 --> 00:20:59,400
this hierarchy becomes the subscription vending machine's target map when a team requests a new production subscription it lands under prod inherits prod policies attaches to shared services and comes with logging and budget baselines
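As a sketch of that vending logic — the management group names, baseline identifiers, and function here are invented for illustration, not real Azure APIs — the target map can be as dull as a lookup table:

```python
# Hypothetical sketch of the subscription vending machine's target map.
# Management group names and baseline identifiers are invented; real
# vending would drive Azure APIs, not return dicts.

TARGET_MAP = {
    ("prod", "online"):    {"mg": "mg-prod-online",
                            "baselines": ["prod-policy", "hub-peering", "central-logging", "budget"]},
    ("prod", "core"):      {"mg": "mg-prod-core",
                            "baselines": ["prod-policy", "hub-peering", "central-logging", "budget"]},
    ("nonprod", "online"): {"mg": "mg-nonprod-online",
                            "baselines": ["nonprod-policy", "hub-peering", "central-logging"]},
    ("sandbox", "any"):    {"mg": "mg-sandbox",
                            "baselines": ["sandbox-guardrails", "budget"]},
}

def vend_subscription(environment: str, category: str) -> dict:
    """Resolve where a requested subscription lands and what it inherits."""
    key = (environment, category)
    if key not in TARGET_MAP:
        key = (environment, "any")  # category-agnostic environments like sandbox
    if key not in TARGET_MAP:
        # No negotiation: an unmapped request means the platform contract
        # is incomplete, not that the workload is special.
        raise ValueError(f"no vending target for {environment}/{category}")
    target = TARGET_MAP[key]
    return {"management_group": target["mg"], "inherits": list(target["baselines"])}
```

The point of the sketch is that onboarding becomes a deterministic lookup: the request either lands in a governed place with known baselines, or it fails loudly and forces a platform conversation instead of an exception.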
192
00:20:59,400 --> 00:21:16,400
when a team requests a dev test subscription it lands under non-prod inherits the right baseline and still stays observable and supportable and here's the part people miss until the first incident environment first governance is how you preserve separation of duties without turning your platform team into a gatekeeper
193
00:21:16,400 --> 00:21:31,400
because separation of duties is not nobody can do anything it is the system makes it hard to do the wrong thing and easy to do the right thing with clear ownership boundaries management groups enforce the baseline subscriptions isolate the impact and teams still ship
194
00:21:31,400 --> 00:21:36,400
this is also why mirroring the org chart failed so badly in financial services and health care
195
00:21:36,400 --> 00:21:49,400
the moment a subscription moves because a business unit changes your inherited policy changes your logging changes your access model changes your compliance scope changes you've just turned an HR decision into a security change that's not governance that's conditional chaos
196
00:21:49,400 --> 00:22:02,400
so set the hierarchy by environment first then treat subscriptions as controlled blast radiuses and you'll finally have a foundation that survives reorganization supports audits and scales onboarding without re-litigating every rule for every team
197
00:22:02,400 --> 00:22:17,400
shared services subscriptions identity security connectivity management once the hierarchy is environment first you need to build the shared services layer that every workload will quietly depend on this is where "we'll centralize later" dies in reality
198
00:22:17,400 --> 00:22:24,400
because shared services aren't optional conveniences they're the things that make your controls consistent and your operation survivable
199
00:22:24,400 --> 00:22:45,400
so the platform landing zone needs shared services subscriptions that are boring obvious and defended identity security connectivity and management start with the identity boundary because everything else is downstream of who is allowed to do what single tenant by default central ownership teams consume identity they never design it that is not ideology it is failure prevention identity
200
00:22:45,400 --> 00:22:59,400
failures don't degrade gracefully they cascade they create parallel authentication paths inconsistent conditional access and audit trails that can't answer basic questions like which identity accessed this data from which device under what conditions
201
00:22:59,400 --> 00:23:10,400
and once you let one app temporarily bypass the standard identity model you've created a second truth then a third then incident response becomes archaeology you can decentralize workloads you cannot decentralize identity
202
00:23:10,400 --> 00:23:24,400
so the platform team owns Entra ID patterns naming conventions for groups life cycle for app registrations privileged access via PIM conditional access baselines and the documented "how do we onboard an app" flow
203
00:23:24,400 --> 00:23:38,400
apps don't get to improvise identity because they were in a hurry next is the security subscription and yes it deserves its own boundary security tooling is not just another shared service it is the part of the platform that must remain trustworthy when everything else is on fire
204
00:23:38,400 --> 00:23:58,400
centralize logging Microsoft Sentinel if you're using it Defender for Cloud configuration alert routing and incident response automation belong here the point is separation of duties and blast radius if an attacker compromises a workload subscription they should not be able to tamper with the monitoring that detects them if an internal admin makes a mistake in a workload subscription
205
00:23:58,400 --> 00:24:12,400
it should not break the organization's ability to investigate it so security goes into its own subscription under its own management group with its own access model tight minimal auditable then connectivity this is where too many organizations get clever too early
206
00:24:12,400 --> 00:24:27,400
and end up with a network that only the person who built it can explain connectivity is the shared pathway the hub the firewall or inspection layer DNS private endpoints alignment express route or VPN termination and the routing model that keeps traffic predictable
207
00:24:27,400 --> 00:24:41,400
and because we're going to take a firm stance later hub and spoke first that means the connectivity subscription owns the hub and the shared controls and workloads consume it through spokes or peering patterns that are standardized you are not building a bespoke network per application
208
00:24:41,400 --> 00:24:56,400
you are building one network posture that apps are allowed to attach to early on boring networks beat clever ones finally management this is not monitoring as a tool this is management as an operating discipline Log Analytics workspaces for platform signals
209
00:24:56,400 --> 00:25:08,400
update management if you use it backup and recovery patterns inventory and the baseline observability contract that says every workload produces logs metrics and traces in a consistent way
210
00:25:08,400 --> 00:25:16,400
with retention that matches your risk profile because without management every team will pick a different logging approach and you won't get correlation you'll get noise
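That observability contract is typically pinned down as one standard diagnostic settings shape routed to the central Log Analytics workspace, often deployed automatically by policy. A rough sketch of that shape — the name is arbitrary and the workspace id is a placeholder:

```json
{
  "name": "platform-baseline",
  "properties": {
    "workspaceId": "<resource id of the central Log Analytics workspace>",
    "logs": [
      { "categoryGroup": "allLogs", "enabled": true }
    ],
    "metrics": [
      { "category": "AllMetrics", "enabled": true }
    ]
  }
}
```

When every resource emits the same shape to the same place, correlation is possible; when every team picks its own shape, you get the noise the episode warns about.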
211
00:25:16,400 --> 00:25:29,400
now the mistake people make is thinking they can build workloads first and retrofit shared services later they can't once teams have shipped with local logging local DNS hacks local identity exceptions and local security tools those decisions become dependencies
212
00:25:29,400 --> 00:25:42,400
and dependencies have political weight "you can't change that it'll break production" so drift becomes permanent that's why "we'll centralize later" fails not because centralization is hard but because undoing distributed decisions is harder than making them
213
00:25:42,400 --> 00:25:58,400
shared services are the paved road without them every team builds a goat trail and goat trails don't converge into highways they converge into a map nobody trusts so build the shared services subscriptions early wire them to the management group hierarchy and make them the default dependency for every new subscription that gets vended
214
00:25:58,400 --> 00:26:11,400
because once shared services exist onboarding stops being negotiation it becomes a repeatable pathway network topology stance hub and spoke first vWAN earns its keep now we get to the part where enterprise migrations
215
00:26:11,400 --> 00:26:29,400
quietly die the network not because networking is hard in Azure it's hard everywhere it dies because people treat network topology as a preference not a constraint system they pick whatever looks modern whatever the last architect used whatever the partner sells and then they retrofit security and operations around it that's backwards
216
00:26:29,400 --> 00:26:39,400
in a migration the network is the policy delivery mechanism it determines where traffic can go what it can touch what gets inspected what gets logged and how quickly you can explain a path during an incident review
217
00:26:39,400 --> 00:26:49,400
so the position is firm hub and spoke first always why because in early migrations you are not optimizing for elegance you are optimizing for predictability under stress
218
00:26:49,400 --> 00:27:00,400
hub and spoke gives you one place to centralize the things that must be consistent egress control inspection DNS and the you can't just open a public endpoint because you are in a hurry problem
219
00:27:00,400 --> 00:27:15,400
it also gives you a clean story in audits one hub per region spokes per workload subscription shared controls in the hub workloads attached through a defined pattern if someone asks how does traffic leave you can answer in one sentence early on boring networks beat clever ones now the practical rationale
220
00:27:15,400 --> 00:27:27,400
most enterprises start with hybrid reality data centers MPLS ExpressRoute or VPN legacy DNS and dependencies that still assume RFC 1918 space and predictable routing
221
00:27:27,400 --> 00:27:37,400
hub and spoke maps to that it gives you a single termination point for hybrid connectivity and a single place to enforce inspection before traffic hits the internet or your on-prem network
222
00:27:37,400 --> 00:27:45,400
and inspection is not optional even if you don't run a full firewall stack day one you still need centralized egress control and a path that can be instrumented
223
00:27:45,400 --> 00:27:51,400
because we'll add inspection later turns into we build a distributed bypass network and now everything depends on it
224
00:27:51,400 --> 00:27:58,400
that's not a security gap that's architecture drift DNS is the other quiet killer private endpoints only work cleanly when DNS is consistent
225
00:27:58,400 --> 00:28:07,400
hub and spoke lets you centralize private DNS zones DNS forwarding and resolver patterns so applications don't invent their own name resolution hacks
226
00:28:07,400 --> 00:28:14,400
if each spoke team does DNS their own way private endpoints become a weekly incident then there's the real enterprise constraint
227
00:28:14,400 --> 00:28:22,400
routing exceptions every exception you add to just make this one app work becomes a permanent routing rule someone has to remember exists
228
00:28:22,400 --> 00:28:31,400
over time you get hub sprawl multiple hubs inconsistent UDRs inconsistent peering and temporary transitive routing that nobody can defend in an audit
229
00:28:31,400 --> 00:28:39,400
so the contract is one hub per region owned by the platform team spokes are cheap and disposable spokes are where workloads live the hub is where rules live
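In practice, "the hub is where rules live" often lands as a standard route table stamped onto every spoke subnet so default egress is forced through the hub's firewall or inspection layer. An ARM-style sketch of that shape — the name and the firewall's private IP are illustrative:

```json
{
  "name": "rt-spoke-default",
  "properties": {
    "disableBgpRoutePropagation": true,
    "routes": [
      {
        "name": "default-via-hub",
        "properties": {
          "addressPrefix": "0.0.0.0/0",
          "nextHopType": "VirtualAppliance",
          "nextHopIpAddress": "10.0.0.4"
        }
      }
    ]
  }
}
```

Because the same route table is applied to every spoke, "how does traffic leave" has exactly one answer — and every exception to it is visible as a deviation from this template rather than a rule someone has to remember.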
230
00:28:39,400 --> 00:28:47,400
now about vWAN Virtual WAN is not bad it is not the enemy it's just not your first move vWAN earns its keep when three conditions are already true
231
00:28:47,400 --> 00:28:51,400
you already operate globally you already understand your traffic patterns
232
00:28:51,400 --> 00:28:59,400
and you already have network operations maturity that can handle an abstracted routing fabric without turning every outage into portal archaeology
233
00:28:59,400 --> 00:29:06,400
because vWAN optimizes for scale and connectivity complexity lots of branches lots of regions lots of routing domains lots of interconnect needs
234
00:29:06,400 --> 00:29:14,400
but early migrations don't need that early migrations need a topology you can reason about when something breaks at 2 a.m. and hub and spoke wins that trade every time
235
00:29:14,400 --> 00:29:25,400
so the rule is simple start with hub and spoke instrument it stabilize it make it boring then if your footprint and routing complexity justify it migrate to vWAN deliberately with a plan with testing with clear ownership
236
00:29:25,400 --> 00:29:35,400
not as a we should be cloud native gesture because cloud native networking isn't a topology it's your ability to enforce intent consistently while the organization keeps changing
237
00:29:35,400 --> 00:29:41,400
and the only network that helps you do that early is the one you can explain secure and troubleshoot without improvising
238
00:29:41,400 --> 00:29:50,400
failure story the identity exception that became a breach report a bank had a critical legacy application that couldn't authenticate cleanly with their standard Entra patterns
239
00:29:50,400 --> 00:29:55,400
it was old it was fragile it had a vendor integration nobody wanted to touch
240
00:29:55,400 --> 00:30:02,400
leadership intent was the usual approve one exception keep the migration moving and normalize it later later never arrives
241
00:30:02,400 --> 00:30:16,400
what actually happened was predictable in slow motion they stood up a parallel authentication path a separate app registration a separate set of conditional access exclusions because the legacy client couldn't meet modern requirements a separate service account pattern because the vendor needed it
242
00:30:16,400 --> 00:30:27,400
and because the app mattered every control around it became negotiable so now they had two identity realities inside one tenant the governed path and the temporary path nothing broke technically everything broke systemically
243
00:30:27,400 --> 00:30:39,400
the invisible constraint they ignored was that identity is not a feature it's the enforcement layer for everything else zero trust depends on consistency conditional access only works when you don't let workloads define their own rules
244
00:30:39,400 --> 00:30:53,400
the moment you allow a parallel path you've created an unmanaged surface that your monitoring and response processes don't understand and the weird part is how this shows up operationally access reviews started failing because the group and role model for that app didn't match the standard life cycles
245
00:30:53,400 --> 00:31:03,400
security couldn't answer basic questions like who has access and why because why was a Slack thread from six months ago the platform team couldn't remove the exclusions because the business would treat it as an outage
246
00:31:03,400 --> 00:31:14,400
so the exception became permanent and permanent exceptions become invisible people stop seeing them as risk they see them as how that system works then a contractor account was compromised not a sophisticated nation state
247
00:31:14,400 --> 00:31:43,400
just credential reuse and bad hygiene the attacker didn't need to break into the governed systems they didn't need to fight the conditional access baseline they walked through the exception path because exceptions are where intent goes to die the breach report was brutal not because data was exfiltrated at scale but because the organization couldn't prove containment quickly incident response couldn't trace access cleanly across the parallel identity pathways the audit questions were simple and devastating why did this exception exist who approved it what compensating controls were in place and why
248
00:31:43,400 --> 00:32:12,400
didn't monitoring flag the access pattern earlier they had no clean answers because they didn't design the exception they tolerated it and you can't audit tolerance the principle that would have prevented it is boring and absolute centralized identity consistent access patterns and zero exceptions that create parallel authentication paths if a workload can't comply you don't carve a hole in identity you sequence the workload you isolate it or you keep it where it is until you can make it compliant you can decentralize workloads you cannot decentralize identity
249
00:32:12,400 --> 00:32:41,400
so now that the platform contract is real management groups shared services hub and spoke centralized identity the next step is where most programs still fail choosing what moves first workload prioritization is portfolio management not a migration queue most enterprises treat migration like a queue whatever screams loudest goes first the oldest hardware the expiring data center lease the app with the most noise from users the system an executive cares about that is not a strategy that is emotional routing workload prioritization is portfolio management
250
00:32:41,400 --> 00:33:06,400
it is the discipline of deciding where you want to spend limited change capacity because migration capacity is finite not because Azure has limits because your organization does you have a limited number of people who can do discovery without lying to themselves a limited number of teams who can remediate dependencies without breaking upstream systems a limited governance throughput for access changes and policy enforcement a limited ability to run parallel systems without burning out operations
251
00:33:06,400 --> 00:33:35,400
the portfolio has to be triaged not queued start with four dimensions that are simple enough to say out loud and brutal enough to be useful business criticality if it fails what breaks revenue customer trust regulatory exposure patient outcomes plant operations this is not importance this is consequence technical complexity not is it old but how tangled are the dependencies how stateful is it how coupled is it how hard is rollback and how many integrations will retaliate when you touch it
252
00:33:35,400 --> 00:33:59,400
regulatory exposure what control narrative is required what audit evidence is required and how intolerant is the system to changes in identity logging and data residency modernization readiness do you have tests do you have CI/CD do you have an owner who can make decisions do you have a team with the skills to change it and do you have a path to operate it differently once it moves that distinction matters because readiness is not enthusiasm
253
00:33:59,400 --> 00:34:24,400
it's whether the workload can survive contact with reality now the executive lens has to be explicit because otherwise prioritization devolves into politics revenue impact does this workload enable growth or just keep lights on risk reduction will moving it reduce operational risk security risk or compliance risk in a measurable way team capability maturity can the team operate with automation observability and disciplined change or are they still in ticket driven survival mode
254
00:34:24,400 --> 00:34:43,400
if you don't include team capability maturity your plan becomes a fantasy you will schedule refactors that the organization cannot execute and this is where we need the migration paths stated plainly four paths no euphemisms rehost move it as is tactical temporary sometimes necessary
255
00:34:43,400 --> 00:35:01,400
it buys time but it does not buy modernization it also preserves technical debt so you need to be honest about that replatform move it with low friction upgrades that reduce operational pain managed databases App Service containerization where it's justified standard logging and identity patterns this is where you get early wins without rewriting the business
256
00:35:01,400 --> 00:35:30,400
refactor change the architecture to exploit cloud native behaviors decoupled services replace brittle integrations build for scalability and resilience this is strategic expensive and only worth it when the workload is a differentiator retire delete it replace it shut it down this is the most ignored option and often the highest ROI not every workload deserves modernization some deserve deletion now the sequencing logic most organizations want to modernize the harder systems first because it feels like bravery it is usually stupidity
257
00:35:30,400 --> 00:35:51,400
crown jewels don't go first they go after the platform proves it can enforce intent and after teams prove they can operate inside the constraints without improvisations so what moves first you start with workloads that are meaningful enough to test the platform but not so critical that a learning experience becomes a business incident that usually means a customer facing web application with moderate criticality and a standard stack
258
00:35:51,400 --> 00:36:15,400
you replatform it you don't rewrite it you use it to teach teams the contract subscription boundaries CI/CD expectations logging baselines and identity patterns this is where teams learn how the platform really works then you address the systems that are business critical but intolerant to change core financial batch overnight processing settlement risk runs those systems are not where you prove cloud native purity they are where you prove stability
259
00:36:15,400 --> 00:36:34,840
so you re-host first with discipline deterministic rollback parallel run where feasible validation gates and explicit sign off from the business re-hosting here is not laziness it's sequencing and while everyone argues about cloud native you quietly reclaim runway by killing junk internal reporting tools with low usage undocumented shadow IT duplicate data
260
00:36:34,840 --> 00:36:55,840
marts one-off workflows that exist because nobody had the courage to say no retire them don't migrate them don't refactor them end them the cheapest modernization is deletion the point of this portfolio framing is that migration is not a heroic march it is a controlled reduction of entropy you are building a governed pathway and then moving the right things through it in the right order with outcomes you can measure
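That portfolio triage can be heard as code too. A minimal sketch — the scoring scale, thresholds, and the function itself are assumptions made for illustration, not a formal framework from the episode:

```python
# Illustrative only: the four paths (rehost / replatform / refactor / retire)
# as decision logic. Scores run 1 (low) to 5 (high); thresholds are invented.

def choose_path(criticality: int, complexity: int, readiness: int,
                differentiator: bool, still_used: bool) -> str:
    if not still_used:
        return "retire"       # the cheapest modernization is deletion
    if criticality >= 4 and complexity >= 4:
        return "rehost"       # crown jewels: continuity under control now, refactor later
    if differentiator and readiness >= 4:
        return "refactor"     # strategic and expensive, only for differentiators
    return "replatform"       # default: low-friction wins without rewriting the business
```

The ordering is the point: deletion is checked first, stability for critical tangled systems beats ambition, and refactoring is earned by readiness — everything else takes the replatform path.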
261
00:36:55,840 --> 00:37:10,840
because if you don't choose intentionally the business will choose for you and the business will choose urgency over coherence every single time next we'll do this as a verbal tabletop exercise with three workloads so you can hear the decision logic in motion not as a framework diagram
262
00:37:10,840 --> 00:37:25,840
tabletop triage workload one customer facing web app re platform take workload one a customer facing web application in financial services not the core ledger not the overnight batch something real revenue adjacent but not the crown jewels profile it quickly
263
00:37:25,840 --> 00:37:39,840
medium criticality if it's down customers complain and conversions drop but the bank doesn't stop settling transactions moderate change frequency the business wants tweaks every sprint or two standard stack web front end API layer a database maybe a caching tier
264
00:37:39,840 --> 00:37:49,840
and a couple integrations to systems of record that still live elsewhere this workload is where programs should start because it exposes reality without detonating the company now the decision
265
00:37:49,840 --> 00:38:03,840
re platform not re factor not cloud native rewrite re platform in other words move it into Azure and make a few low friction choices that reduce operational drag immediately without pretending you can redesign the business logic on day one
266
00:38:03,840 --> 00:38:26,840
so what does replatform mean in practice without turning this into Azure feature bingo it means you deliberately trade fragile bespoke operational responsibilities for managed ones you standardize how it's deployed you standardize how it authenticates you standardize how it emits logs you standardize how it connects to dependencies and you inherit the landing zone contract instead of negotiating it this is where teams learn how the platform really works
267
00:38:26,840 --> 00:38:56,640
because the first lesson most teams learn in Azure is not about compute it's about constraints they learn that a subscription is not an empty canvas it's an environment with inherited policy they learn that networks have a shape and egress isn't open by default they learn that identity isn't whatever the app needs it's whatever the platform can audit and control they learn that logging is not optional because we'll add monitoring later it's part of the contract so for this workload the controls should be explicit upfront landing zone inheritance the subscription lands in the correct
268
00:38:56,640 --> 00:39:07,240
management group picks up baseline policy and gets attached to shared services patterns no bespoke exceptions if it needs an exception that's a signal the platform contract is incomplete not that the app is special
269
00:39:07,240 --> 00:39:26,440
CI/CD expectations it deploys through a pipeline with gated promotion into production with a rollback pattern that doesn't require heroics even if the code is ugly deployment should be boring boring is the point observability baseline the app produces logs metrics and traces in the standard way the platform expects if the incident response team can't answer what changed
270
00:39:26,440 --> 00:39:43,240
and what failed without hunting through five dashboards you didn't replatform you relocated and then you choose the minimal modernization moves that reduce operational cost to serve maybe you move the database to a managed service so patching stops being a midnight ritual maybe you move static content to a service that scales predictably
271
00:39:43,240 --> 00:39:55,440
maybe you modernize secrets so the team stops hard coding credentials into deployment pipelines like it's 2009 but the key is sequencing the goal is to remove operational friction first because operational friction is what kills time to market
272
00:39:55,440 --> 00:40:10,840
and time to market is what executives actually notice now why is this workload first because it creates a learning loop with tolerable consequences teams will misinterpret policies they will hit networking assumptions they didn't know they had they will discover their simple app depends on a legacy authentication flow
273
00:40:10,840 --> 00:40:20,440
a DNS shortcut or database permission model that only worked because it lived inside the data center good that's the point you want to discover those constraints while the blast radius is small
274
00:40:20,440 --> 00:40:24,740
And you want to turn those discoveries into platform improvements not one off fixes.
275
00:40:24,740 --> 00:40:29,200
So the output of this first migration isn't, the app is in Azure.
276
00:40:29,200 --> 00:40:33,020
The output is you now have a repeatable pattern for a class of workloads.
277
00:40:33,020 --> 00:40:36,220
And you have hardened the landing zone contract based on real behavior.
278
00:40:36,220 --> 00:40:37,040
That's compounding.
279
00:40:37,040 --> 00:40:39,200
And once that pattern exists, you can scale.
280
00:40:39,200 --> 00:40:43,180
You can onboard another customer facing app without reinventing everything.
281
00:40:43,180 --> 00:40:48,780
You can measure provisioning speed, deployment frequency, incident rates, and cost variance with the baseline you trust.
282
00:40:48,780 --> 00:40:56,460
Then and only then you earn the right to touch workload two, the core financial batch system that doesn't tolerate learning experiences.
283
00:40:56,460 --> 00:41:01,260
Tabletop triage, workload two, core financial batch, re-host now, refactor later.
284
00:41:01,260 --> 00:41:08,040
Workload two is the one executives always want to modernize properly first because it sounds like courage, the core financial batch system.
285
00:41:08,040 --> 00:41:13,180
This is the overnight processing layer, settlement runs, risk calculations, end of day reconciliations.
286
00:41:13,180 --> 00:41:18,560
The stuff that doesn't show up in your marketing site but quietly keeps the institution solvent and compliant.
287
00:41:18,560 --> 00:41:20,320
Profile it honestly.
288
00:41:20,320 --> 00:41:21,660
Business critical.
289
00:41:21,660 --> 00:41:24,220
If it fails, you don't have a bad user experience.
290
00:41:24,220 --> 00:41:30,900
You have a regulatory incident, missed cutoffs, downstream systems in undefined states, and finance calling people at home.
291
00:41:30,900 --> 00:41:32,020
Low change tolerance.
292
00:41:32,020 --> 00:41:34,920
This workload was built to be stable, not flexible.
293
00:41:34,920 --> 00:41:41,300
Deep legacy dependencies, file drops, brittle schedules, mainframe adjacencies, that one database nobody is allowed to touch.
294
00:41:41,300 --> 00:41:45,360
And a chain of consumers that assume the batch finishes by 5 a.m. or the day is ruined.
295
00:41:45,360 --> 00:41:47,060
So what's the decision? Re-host now.
296
00:41:47,060 --> 00:41:48,820
Refactor later.
297
00:41:48,820 --> 00:41:51,000
And yes, that offends the cloud-native purists.
298
00:41:51,000 --> 00:41:53,120
That's fine. They don't run the risk register.
299
00:41:53,120 --> 00:41:54,960
Re-hosting here is not laziness.
300
00:41:54,960 --> 00:41:56,100
It's sequencing.
301
00:41:56,100 --> 00:41:59,460
Because the primary objective for this workload is not innovation.
302
00:41:59,460 --> 00:42:00,900
It's continuity under control.
303
00:42:00,900 --> 00:42:08,400
The business wants the same outputs on the same schedule with the same evidence, and ideally fewer late-night outages caused by dying hardware.
304
00:42:08,400 --> 00:42:12,300
So you move it with minimal functional change but maximum operational discipline.
305
00:42:12,300 --> 00:42:14,320
That distinction matters.
306
00:42:14,320 --> 00:42:19,140
A bad re-host is copy the VMs, hope it runs, and call it done.
307
00:42:19,140 --> 00:42:26,620
A good re-host is treat the migration as a controlled experiment with a deterministic rollback story and validation gates that are brutal.
308
00:42:26,620 --> 00:42:29,740
Start with the real constraint nobody wants to say out loud.
309
00:42:29,740 --> 00:42:32,120
Batch is a dependency graph, not a server.
310
00:42:32,120 --> 00:42:36,000
The batch job you think you're migrating is probably just one node in a chain.
311
00:42:36,000 --> 00:42:41,580
Upstream data extraction, transformation, load, reconciliation, report generation, and distribution.
312
00:42:41,580 --> 00:42:44,360
And half of those jobs are really just protocols.
313
00:42:44,360 --> 00:42:50,880
A file placed in a folder, a naming convention, a log file, an email alert, a runbook step someone does manually at 2
314
00:42:50,880 --> 00:42:51,580
a.m.
315
00:42:51,580 --> 00:42:53,880
So the first question isn't what size VM do we need.
316
00:42:53,880 --> 00:42:58,180
It's what are the upstream and downstream contracts, and how do we prove they still hold?
317
00:42:58,180 --> 00:42:59,380
Now the controls.
318
00:42:59,380 --> 00:43:01,380
You need a deterministic rollback plan.
319
00:43:01,380 --> 00:43:03,180
Not "we can restore from backup."
320
00:43:03,180 --> 00:43:09,680
Deterministic means if validation fails, you know exactly how you return to the last known good state, and you've rehearsed it.
321
00:43:09,680 --> 00:43:12,780
You don't improvise rollback on a system that closes the books.
322
00:43:12,780 --> 00:43:15,580
Then you use a parallel run strategy where it's feasible.
323
00:43:15,580 --> 00:43:23,580
Run the batch in Azure and on-prem in parallel for a defined period, compare outputs, reconcile differences, and only then cut over.
324
00:43:23,580 --> 00:43:27,980
Not forever, not until we feel good, but long enough to cover real scenarios.
325
00:43:27,980 --> 00:43:33,980
Month end, quarter end, unusual volumes, and the ugly edge cases that only show up when auditors are watching.
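The parallel-run comparison described here boils down to reconciling two output sets record by record. A minimal sketch, assuming a simple record layout (`id` plus `amount`) and a tolerance that would come from finance, not from this code:

```python
from decimal import Decimal

def reconcile_outputs(onprem_rows, azure_rows, tolerance=Decimal("0.01")):
    """Compare on-prem and Azure batch outputs keyed by record ID.

    Returns a list of discrepancies; an empty list means the run reconciles.
    Row format (assumed for illustration): {"id": str, "amount": Decimal}.
    """
    onprem = {r["id"]: r["amount"] for r in onprem_rows}
    azure = {r["id"]: r["amount"] for r in azure_rows}
    issues = []
    for rec_id in onprem.keys() | azure.keys():
        if rec_id not in azure:
            issues.append((rec_id, "missing in Azure run"))
        elif rec_id not in onprem:
            issues.append((rec_id, "extra in Azure run"))
        elif abs(onprem[rec_id] - azure[rec_id]) > tolerance:
            issues.append((rec_id, f"amount drift: {onprem[rec_id]} vs {azure[rec_id]}"))
    return sorted(issues)
```

Only when this returns empty across month end, quarter end, and unusual volumes do you have evidence worth cutting over on.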
326
00:43:33,980 --> 00:43:38,080
Then you put validation gates in writing, completeness, did every expected output land.
327
00:43:38,080 --> 00:43:40,680
Accuracy, do totals reconcile?
328
00:43:40,680 --> 00:43:41,680
Timeliness?
329
00:43:41,680 --> 00:43:43,480
Did it finish inside the window?
330
00:43:43,480 --> 00:43:48,580
Security posture: did identity and access behave as expected, and are logs retained where they need to be?
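Putting the gates in writing can be as literal as a checklist function. The gate names below follow the list just given; the input fields are illustrative assumptions about what a run report would contain:

```python
def evaluate_gates(run):
    """Evaluate the written validation gates for one batch run.

    `run` is an assumed dict of observed facts, e.g.:
    {"outputs_expected": 12, "outputs_landed": 12,
     "control_total": 100.0, "reconciled_total": 100.0,
     "finished_by_deadline": True, "logs_retained": True}
    Returns (passed, failures) so a failed gate blocks cutover explicitly.
    """
    gates = {
        "completeness": run["outputs_landed"] == run["outputs_expected"],
        "accuracy": run["reconciled_total"] == run["control_total"],
        "timeliness": run["finished_by_deadline"],
        "security_posture": run["logs_retained"],
    }
    failures = [name for name, ok in gates.items() if not ok]
    return (not failures, failures)
```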
331
00:43:48,580 --> 00:43:50,480
And this is where the platform matters again.
332
00:43:50,480 --> 00:43:57,180
If your landing zone contract is real, you can place this workload into an environment with known controls, known logging, known network paths.
333
00:43:57,180 --> 00:44:04,580
Without that, every validation failure turns into an argument about whether the platform is stable or the workload is weird. You lose time in ambiguity.
334
00:44:04,580 --> 00:44:06,280
Now why refactor later?
335
00:44:06,280 --> 00:44:08,980
Because refactoring batch is architectural surgery.
336
00:44:08,980 --> 00:44:16,280
It often means breaking tight coupling, replacing file-based integration with event-driven patterns, changing data models, rewriting scheduling and orchestration,
337
00:44:16,280 --> 00:44:19,580
and renegotiating every downstream consumer contract.
338
00:44:19,580 --> 00:44:21,080
That is not phase one work.
339
00:44:21,080 --> 00:44:28,180
You refactor after you've stabilized operations, after you've proven observability, after the team has capacity, and after the platform has earned trust.
340
00:44:28,180 --> 00:44:35,680
Then refactoring becomes a choice with a business case, not an ideological requirement to be cloud-native. So the output of workload two isn't modernization.
341
00:44:35,680 --> 00:44:43,180
It's a stable, governed relocation that buys you time, reduces hardware risk, and sets up modernization without betting the bank on it.
342
00:44:43,180 --> 00:44:47,480
Tabletop triage, workload 3, internal reporting, shadow IT, retire.
343
00:44:47,480 --> 00:44:52,980
Workload 3 is the one nobody puts on the migration roadmap, even though it quietly consumes the most capacity.
344
00:44:52,980 --> 00:44:57,980
Internal reporting, shadow IT, the Access database that became critical.
345
00:44:57,980 --> 00:45:01,480
The spreadsheet-driven workflow with a service account that everyone shares.
346
00:45:01,480 --> 00:45:08,780
The Power BI report pointed at an old data mart that nobody patches because nobody owns it. Profile it honestly.
347
00:45:08,780 --> 00:45:13,080
Low usage: ten people scream when it's down, and you confuse that for importance.
348
00:45:13,080 --> 00:45:17,480
Poor documentation. If the original builder left, you have folklore, not runbooks.
349
00:45:17,480 --> 00:45:25,780
Duplicate functionality. It overlaps with the enterprise reporting platform, the finance warehouse, or the new analytics stack that will replace it someday.
350
00:45:25,780 --> 00:45:29,380
And yet it sits in the migration queue like it deserves investment. It doesn't.
351
00:45:29,380 --> 00:45:35,480
The decision here is retire, not re-host quickly, not move it because it's easy, but retire it.
352
00:45:35,480 --> 00:45:40,680
Because migration capacity is finite and this class of workload is pure distraction.
353
00:45:40,680 --> 00:45:47,980
It burns engineering time, adds audit scope, increases identity sprawl, and creates more exceptions that have to be defended forever.
354
00:45:47,980 --> 00:45:53,380
The cheapest modernization is deletion. Now, retirement isn't "turn it off and hope nobody notices."
355
00:45:53,380 --> 00:45:59,980
It's a controlled removal with a replacement narrative because the business will always ask the same question, what do we lose?
356
00:45:59,980 --> 00:46:02,980
So you do three things. First, you quantify the distraction cost.
357
00:46:02,980 --> 00:46:08,780
How many hours a month does the organization spend maintaining it, troubleshooting it, or feeding it data manually?
358
00:46:08,780 --> 00:46:11,980
How often does it break during month end because the underlying extract failed?
359
00:46:11,980 --> 00:46:15,780
How many people have access to it and how many of those accesses are justified?
360
00:46:15,780 --> 00:46:19,880
If you can't put a number on it, you can't retire it because you lose the argument to sentiment.
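Putting a number on the distraction cost is simple arithmetic once the inputs are gathered. All figures below are assumed placeholders you would pull from tickets, calendars, and on-call logs:

```python
def distraction_cost(maintenance_hours, manual_feed_hours, incidents,
                     hours_per_incident, hourly_rate):
    """Rough monthly cost of keeping a shadow-IT workload alive.

    The formula is just (total hours burned) x (loaded hourly rate);
    the point is to have a defensible number, not a precise one.
    """
    total_hours = maintenance_hours + manual_feed_hours + incidents * hours_per_incident
    return total_hours * hourly_rate
```

A workload burning 21 hours a month at a loaded rate of 100 an hour costs 2,100 a month just to keep breathing, before you count audit scope or identity sprawl.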
361
00:46:19,880 --> 00:46:22,180
Second, you shrink the risk surface before you kill it.
362
00:46:22,180 --> 00:46:26,180
Lock down access, remove shared credentials, stop the service account sprawl.
363
00:46:26,180 --> 00:46:28,380
Put it into a controlled read-only state if you can.
364
00:46:28,380 --> 00:46:30,580
You're not improving it because it deserves improvement.
365
00:46:30,580 --> 00:46:33,880
You're preventing it from becoming a breach report while you decommission it.
366
00:46:33,880 --> 00:46:36,280
Third, you replace the outcome, not the tool.
367
00:46:36,280 --> 00:46:43,180
If the report exists because finance needs a daily liquidity view, then you migrate that requirement to the enterprise reporting platform.
368
00:46:43,180 --> 00:46:46,080
If the app exists because operations needs a weekly inventory dashboard,
369
00:46:46,080 --> 00:46:48,880
build that dashboard in the system that already owns inventory.
370
00:46:48,880 --> 00:46:52,480
The point is the business requirement survives, the shadow implementation doesn't,
371
00:46:52,480 --> 00:46:54,980
and here's the governance payoff most people miss.
372
00:46:54,980 --> 00:46:57,880
Every retired workload buys you back policy coherence.
373
00:46:57,880 --> 00:47:00,980
It reduces exceptions, it reduces temporary firewall rules,
374
00:47:00,980 --> 00:47:03,880
it reduces weird identity artifacts that nobody wants to own,
375
00:47:03,880 --> 00:47:07,680
it reduces logging pipelines pointed at systems that shouldn't exist.
376
00:47:07,680 --> 00:47:10,980
It reduces your audit scope, which is real money in regulated industries.
377
00:47:10,980 --> 00:47:13,780
Deletion is not just cost savings, it's entropy reduction.
378
00:47:13,780 --> 00:47:16,380
And entropy reduction is what increases throughput.
379
00:47:16,380 --> 00:47:20,580
Because once you stop migrating junk, you have capacity to migrate systems that matter.
380
00:47:20,580 --> 00:47:23,780
Your platform team stops spending time supporting edge case nonsense.
381
00:47:23,780 --> 00:47:26,080
Your security team stops chasing phantom assets.
382
00:47:26,080 --> 00:47:29,580
Your delivery teams stop inheriting undocumented liabilities.
383
00:47:29,580 --> 00:47:31,380
This is why portfolio management matters.
384
00:47:31,380 --> 00:47:35,180
A migration program that never retires anything will eventually stall.
385
00:47:35,180 --> 00:47:40,380
Not because Azure is hard, but because every moved workload increases the number of things you have to govern.
386
00:47:40,380 --> 00:47:45,180
Retiring workloads is how you stop the migration factory from drowning in its own output.
387
00:47:45,180 --> 00:47:48,180
So by the time you've re-platformed a customer facing app,
388
00:47:48,180 --> 00:47:51,780
re-hosted the core batch safely and retired the internal junk,
389
00:47:51,780 --> 00:47:53,080
you've done something rare.
390
00:47:53,080 --> 00:47:55,380
You've created momentum without creating drift,
391
00:47:55,380 --> 00:47:58,180
and now you can finally talk about execution at scale.
392
00:47:58,180 --> 00:48:02,380
Phased modernization with a foundation that doesn't collapse under success.
393
00:48:02,380 --> 00:48:05,180
Phased modernization: evolve, don't boil the ocean.
394
00:48:05,180 --> 00:48:06,580
Now the uncomfortable truth.
395
00:48:06,580 --> 00:48:10,080
Once you can triage workloads, you still don't have a migration strategy.
396
00:48:10,080 --> 00:48:11,280
You have a list of decisions.
397
00:48:11,280 --> 00:48:16,080
The strategy is how those decisions compound without collapsing your organization.
398
00:48:16,080 --> 00:48:18,680
And that's why Big Bang modernize everything programs fail.
399
00:48:18,680 --> 00:48:23,080
Not because they're ambitious, but because they create too many simultaneous moving parts:
400
00:48:23,080 --> 00:48:27,880
platform changes, workload changes, operating model changes, security model changes,
401
00:48:27,880 --> 00:48:30,880
and cost model changes all at once.
402
00:48:30,880 --> 00:48:33,680
That is not transformation; that is uncontrolled coupling.
403
00:48:33,680 --> 00:48:36,980
So the right model is phased modernization: evolve, don't boil the ocean,
404
00:48:36,980 --> 00:48:39,780
three phases, simple enough to repeat, strict enough to enforce.
405
00:48:39,780 --> 00:48:43,980
Phase one is foundation, phase two is migration at scale, phase three is modernization.
406
00:48:43,980 --> 00:48:45,380
And yes, it sounds obvious.
407
00:48:45,380 --> 00:48:46,180
That's the point.
408
00:48:46,180 --> 00:48:48,280
If your program can't be explained in one minute,
409
00:48:48,280 --> 00:48:50,280
it won't survive the first executive reshuffle.
410
00:48:50,280 --> 00:48:52,580
Start with phase one, foundation.
411
00:48:52,580 --> 00:48:55,580
This is where you build the platform contract we've already been talking about.
412
00:48:55,580 --> 00:49:00,580
Landing zones, management group hierarchy, shared services, identity boundaries,
413
00:49:00,580 --> 00:49:04,080
networking, policy baselines, and observability.
414
00:49:04,080 --> 00:49:07,780
But phase one also includes the thing people consistently forget.
415
00:49:07,780 --> 00:49:09,980
The migration factory setup.
416
00:49:09,980 --> 00:49:11,980
Not a project team.
417
00:49:11,980 --> 00:49:12,980
A factory.
418
00:49:12,980 --> 00:49:16,280
A repeatable delivery system that turns a workload from,
419
00:49:16,280 --> 00:49:20,080
"we think we understand it" into "it's running in Azure, validated, and supportable."
420
00:49:20,080 --> 00:49:24,080
That means standards, runbooks, a definition of done, validation criteria,
421
00:49:24,080 --> 00:49:29,180
rollback patterns, exception handling that is explicit, tracked, and treated as an entropy generator.
422
00:49:29,180 --> 00:49:32,980
Because exceptions aren't just technical debt, they are governance debt with interest.
423
00:49:32,980 --> 00:49:37,180
And if you don't build the factory discipline before you scale, you will scale inconsistency.
424
00:49:37,180 --> 00:49:39,380
Which brings us to phase two, migration at scale.
425
00:49:39,380 --> 00:49:42,280
This is where you stop treating each migration as an artisanal exercise.
426
00:49:42,280 --> 00:49:44,880
You run waves, each wave follows the same cadence,
427
00:49:44,880 --> 00:49:49,380
discover, map dependencies, decide the path, move, validate, stabilize,
428
00:49:49,380 --> 00:49:51,380
and you measure outcomes, not activity.
429
00:49:51,380 --> 00:49:54,980
Provisioning speed, how fast can a team get an environment that is production ready?
430
00:49:54,980 --> 00:49:56,380
Deployment frequency.
431
00:49:56,380 --> 00:49:58,780
Are teams shipping more often without heroics?
432
00:49:58,780 --> 00:49:59,980
Incident trends.
433
00:49:59,980 --> 00:50:03,180
Are incidents decreasing, and when they happen, is recovery faster?
434
00:50:03,180 --> 00:50:04,980
Cost variance.
435
00:50:04,980 --> 00:50:08,380
Can you forecast spend per workload or is it a surprise every month?
436
00:50:08,380 --> 00:50:13,580
Organizational throughput, how many teams can move safely in parallel without collapsing central governance?
437
00:50:13,580 --> 00:50:17,180
If you can't measure those, you're not running a factory, you're running a parade.
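Measuring outcomes, not activity, implies a baseline you can diff each wave against. This sketch mirrors the five metrics just listed; the units and field names are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class WaveMetrics:
    """Outcome metrics for one migration wave (illustrative units)."""
    provisioning_days: float      # request -> production-ready environment
    deploys_per_week: float       # deployment frequency across teams
    incidents_per_month: float    # incident trend input
    cost_variance_pct: float      # abs(actual - forecast) / forecast * 100
    parallel_teams: int           # organizational throughput

def wave_regressed(current, baseline):
    """Return the metrics that got worse versus the trusted baseline."""
    worse = []
    if current.provisioning_days > baseline.provisioning_days:
        worse.append("provisioning_speed")
    if current.deploys_per_week < baseline.deploys_per_week:
        worse.append("deployment_frequency")
    if current.incidents_per_month > baseline.incidents_per_month:
        worse.append("incident_trend")
    if current.cost_variance_pct > baseline.cost_variance_pct:
        worse.append("cost_variance")
    if current.parallel_teams < baseline.parallel_teams:
        worse.append("organizational_throughput")
    return worse
```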
438
00:50:17,180 --> 00:50:21,380
And the factory model is what prevents the platform team from becoming a ticket queue.
439
00:50:21,380 --> 00:50:25,380
It creates paved roads, self-service with enforced guardrails,
440
00:50:25,380 --> 00:50:27,780
subscription vending that lands in the right management group,
441
00:50:27,780 --> 00:50:31,580
default policies, default logging, default network integration,
442
00:50:31,580 --> 00:50:33,380
default identity patterns,
443
00:50:33,380 --> 00:50:35,980
Teams get autonomy where autonomy is safe.
444
00:50:35,980 --> 00:50:38,780
They do not get autonomy where autonomy creates conditional chaos.
445
00:50:38,780 --> 00:50:40,980
Now phase three, modernization.
446
00:50:40,980 --> 00:50:43,580
This is where everyone wants to start.
447
00:50:43,580 --> 00:50:48,180
It's the shiny part: containers, managed services, event-driven integration,
448
00:50:48,180 --> 00:50:50,980
data platform evolution, cloud native everything.
449
00:50:50,980 --> 00:50:53,580
And this is where the program dies if you do it too early.
450
00:50:53,580 --> 00:50:56,980
Because modernization is a lagging indicator of platform maturity.
451
00:50:56,980 --> 00:51:01,580
If the platform can't provide consistent identity, network, policy and observability,
452
00:51:01,580 --> 00:51:03,780
modernization becomes probabilistic.
453
00:51:03,780 --> 00:51:07,780
You can build something impressive, but you can't operate it consistently across teams.
454
00:51:07,780 --> 00:51:11,780
So it rots or it becomes bespoke or it becomes a dependency nobody understands.
455
00:51:11,780 --> 00:51:13,580
Same outcome, different packaging.
456
00:51:13,580 --> 00:51:15,180
So modernization has to be earned.
457
00:51:15,180 --> 00:51:19,580
You modernize when the organization can repeatedly deploy, monitor, secure
458
00:51:19,580 --> 00:51:23,380
and recover workloads in the new environment without improvisation.
459
00:51:23,380 --> 00:51:25,180
Then modernization compounds.
460
00:51:25,180 --> 00:51:28,780
You can adopt managed services because operational patterns are standardized.
461
00:51:28,780 --> 00:51:34,180
You can containerize where it's justified because CI/CD, networking and observability are stable.
462
00:51:34,180 --> 00:51:38,380
You can move toward event-driven integration because your identity and security posture
463
00:51:38,380 --> 00:51:40,380
can survive distributed systems.
464
00:51:40,380 --> 00:51:42,580
And here's the trap to call out explicitly.
465
00:51:42,580 --> 00:51:45,180
Forcing refactoring early kills momentum.
466
00:51:45,180 --> 00:51:48,580
It inflates timelines, creates architectural debates instead of delivery
467
00:51:48,580 --> 00:51:50,780
and turns the program into a standards war.
468
00:51:50,780 --> 00:51:53,380
Meanwhile, the data center lease clock keeps ticking
469
00:51:53,380 --> 00:51:55,380
and the business stops believing you can execute.
470
00:51:55,380 --> 00:51:57,380
So you sequence.
471
00:51:57,380 --> 00:52:01,380
Stabilize first, scale next, modernize when the platform can carry it.
472
00:52:01,380 --> 00:52:04,980
And through all three phases, you budget technical debt instead of pretending
473
00:52:04,980 --> 00:52:08,180
it will sort itself out because it won't.
474
00:52:08,180 --> 00:52:12,780
Debt either gets paid deliberately or it gets paid during incidents, audits and outages.
475
00:52:12,780 --> 00:52:15,580
Phased modernization is how you choose the payment schedule.
476
00:52:15,580 --> 00:52:20,380
And once you accept that, the migration program stops being a stressful sprint toward a date.
477
00:52:20,380 --> 00:52:24,580
It becomes a controlled enterprise capability, a system that keeps producing migrations
478
00:52:24,580 --> 00:52:27,380
safely, with outcomes that improve over time.
479
00:52:27,380 --> 00:52:32,180
Next, we make phase one explicit what has to exist before the factory starts running.
480
00:52:32,180 --> 00:52:37,780
Phase one, foundation, landing zones, plus connectivity, plus policy, plus identity.
481
00:52:37,780 --> 00:52:41,180
Phase one is where most migration programs try to save time.
482
00:52:41,180 --> 00:52:42,580
And that's why they lose a year later.
483
00:52:42,580 --> 00:52:44,780
Foundation isn't a diagram and it isn't a workshop.
484
00:52:44,780 --> 00:52:48,980
It's the moment you turn intent into something the platform can enforce when nobody is watching.
485
00:52:48,980 --> 00:52:51,180
So phase one has deliverables, real ones.
486
00:52:51,180 --> 00:52:53,180
If you can't point to them, you don't have a foundation.
487
00:52:53,180 --> 00:52:54,580
You have a slide deck.
488
00:52:54,580 --> 00:52:58,780
First, the platform landing zone exists as deployed code, not as a reference.
489
00:52:58,780 --> 00:53:03,780
Management group hierarchy, subscription vending pathway, shared services subscriptions,
490
00:53:03,780 --> 00:53:09,180
policy assignments, RBAC baseline, naming, tagging, diagnostic settings,
491
00:53:09,180 --> 00:53:14,580
the boring parts that stop every future workload from becoming a bespoke argument.
492
00:53:14,580 --> 00:53:18,180
Second, connectivity is implemented as a standard service, not a per-app adventure.
493
00:53:18,180 --> 00:53:23,180
Hub-and-spoke, one hub per region, and a routing model you can explain without apologizing.
494
00:53:23,180 --> 00:53:28,380
This is where you decide egress control, DNS patterns, private endpoint strategy, and hybrid termination.
495
00:53:28,380 --> 00:53:31,380
If you don't define these up front, every team will define them for you.
496
00:53:31,380 --> 00:53:34,980
Badly. Third, identity is locked down as a platform capability.
497
00:53:34,980 --> 00:53:39,980
Single tenant by default, central ownership, standard onboarding for apps, PIM for privileged access.
498
00:53:39,980 --> 00:53:44,780
Conditional access baselines that reflect your risk profile, not whatever got copy-pasted last year.
499
00:53:44,780 --> 00:53:49,780
Identity is where you decide whether your security model is deterministic or probabilistic.
500
00:53:49,780 --> 00:53:53,380
Fourth, policy as code becomes the muscle, not an aspiration.
501
00:53:53,380 --> 00:53:55,580
Azure Policy isn't compliance reporting.
502
00:53:55,580 --> 00:54:00,380
It's the enforcement mechanism that keeps your platform contract from decaying into exceptions.
503
00:54:00,380 --> 00:54:03,780
Tag policies, location restrictions, deny public endpoints where rational,
504
00:54:03,780 --> 00:54:07,980
mandatory diagnostic settings, required encryption settings,
505
00:54:07,980 --> 00:54:09,580
and a clear rule for exceptions.
506
00:54:09,580 --> 00:54:15,780
They are entropy generators, and every entropy generator has an owner, a ticket, an expiry date, and compensating controls.
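That exception rule can be enforced mechanically: an entry without an owner, a ticket, an expiry date, or compensating controls gets flagged, as does anything past its expiry. The field names here are assumptions for a sketch, not a real Azure artifact:

```python
from datetime import date

# Assumed schema for a policy-exception register entry.
REQUIRED_FIELDS = ("owner", "ticket", "expires", "compensating_controls")

def audit_exceptions(register, today):
    """Flag policy exceptions that are incomplete or past expiry.

    `register` is a list of dicts; anything flagged here is an entropy
    generator that should block sign-off until resolved.
    """
    findings = []
    for exc in register:
        missing = [f for f in REQUIRED_FIELDS if not exc.get(f)]
        if missing:
            findings.append((exc.get("id", "?"), f"missing: {', '.join(missing)}"))
        elif exc["expires"] < today:
            findings.append((exc["id"], "expired"))
    return findings
```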
507
00:54:15,780 --> 00:54:17,580
Now, the migration factory prerequisites.
508
00:54:17,580 --> 00:54:20,780
This is the part leaders love to skip because it looks like process.
509
00:54:20,780 --> 00:54:25,380
But the factory is what converts migration from hero team to organizational throughput.
510
00:54:25,380 --> 00:54:26,780
So you define standards.
511
00:54:26,780 --> 00:54:29,980
What a migrated workload must have before it's considered done.
512
00:54:29,980 --> 00:54:35,180
Logging integrated, alerts routed, backup strategy defined, identity pattern compliant,
513
00:54:35,180 --> 00:54:40,380
network paths predictable, documentation updated, ownership assigned, cost tags present,
514
00:54:40,380 --> 00:54:43,780
a runbook that can survive the original engineer going on leave.
515
00:54:43,780 --> 00:54:48,380
You define runbooks, cutover steps, validation steps, and rollback steps.
516
00:54:48,380 --> 00:54:49,580
Not "we'll figure it out."
517
00:54:49,580 --> 00:54:55,180
Written steps with owners. You define validation criteria: functional validation, performance baselines,
518
00:54:55,180 --> 00:54:57,780
security posture checks, and business sign off gates.
519
00:54:57,780 --> 00:55:02,980
If you don't define validation, you will accept "it seems fine," and "seems fine" is how you get audited.
520
00:55:02,980 --> 00:55:06,580
You define rollback patterns, not a one-off per migration, but a repeatable approach.
521
00:55:06,580 --> 00:55:10,980
Parallel runs where possible, traffic cutover strategy, data consistency validation,
522
00:55:10,980 --> 00:55:15,580
and explicit conditions that trigger rollback without debate. And then you build the operating rhythm.
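"Conditions that trigger rollback without debate" can literally be a predicate agreed before cutover. The condition names below are illustrative assumptions; the shape is what matters — any tripped condition rolls back, no judgment calls at 3 a.m.:

```python
def must_roll_back(checks):
    """Deterministic rollback decision: any tripped condition rolls back.

    `checks` maps an agreed condition name to whether it tripped, e.g.
    {"missed_batch_window": False, "reconciliation_mismatch": True,
     "data_consistency_failure": False}.
    Returns (roll_back, tripped_conditions) for the cutover record.
    """
    tripped = [name for name, failed in checks.items() if failed]
    return (len(tripped) > 0, tripped)
```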
523
00:55:15,580 --> 00:55:19,980
Phase one isn't just building assets, it's establishing the discipline to keep them stable.
524
00:55:19,980 --> 00:55:24,180
Code reviews, change control, deployment pipelines for the platform itself,
525
00:55:24,180 --> 00:55:29,180
and a cadence for policy updates that doesn't turn governance into surprise outages.
526
00:55:29,180 --> 00:55:33,380
Now tie it back to financial services because that's where the lie gets exposed fastest.
527
00:55:33,380 --> 00:55:37,180
Finance doesn't care that your landing zone is aligned with best practices.
528
00:55:37,180 --> 00:55:41,380
Finance cares that you can prove separation of duties, that privileged access is controlled,
529
00:55:41,380 --> 00:55:44,380
that evidence exists, and that changes are traceable.
530
00:55:44,380 --> 00:55:48,380
So the foundation must include auditability as a first class requirement,
531
00:55:48,380 --> 00:55:49,580
not a future enhancement.
532
00:55:49,580 --> 00:55:52,180
Separation of duties isn't an org chart statement.
533
00:55:52,180 --> 00:55:56,580
It's enforced pathways: platform owners manage the contract, application teams consume it,
534
00:55:56,580 --> 00:56:01,780
security tooling stays protected, and approvals are engineered into pipelines where it matters.
535
00:56:01,780 --> 00:56:04,180
This is also where you make peace with the real trade-off.
536
00:56:04,180 --> 00:56:06,580
Governance isn't free, but neither is chaos.
537
00:56:06,580 --> 00:56:09,780
If you want faster time to market, you don't remove guardrails.
538
00:56:09,780 --> 00:56:12,180
You standardize them, so teams stop negotiating them.
539
00:56:12,180 --> 00:56:14,580
If you want stability, you don't rely on heroics.
540
00:56:14,580 --> 00:56:17,380
You rely on repeatable environments and predictable controls.
541
00:56:17,380 --> 00:56:20,580
If you want predictable costs to serve, you don't hope people tag resources.
542
00:56:20,580 --> 00:56:22,980
You enforce tags, budgets, and visibility.
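Enforcing tags rather than hoping means a resource missing required tags fails a check before it deploys. In Azure that would typically be a policy deny effect; the compliance logic itself looks like this sketch, with an assumed tag set:

```python
REQUIRED_TAGS = {"cost_center", "owner", "environment"}  # assumed tag set

def tag_violations(resources):
    """Return resources missing required tags.

    In practice this maps to a deny/audit policy at deployment time;
    here it is just the check itself, over dicts shaped like
    {"name": str, "tags": {tag: value}}.
    """
    violations = {}
    for res in resources:
        missing = REQUIRED_TAGS - set(res.get("tags", {}))
        if missing:
            violations[res["name"]] = sorted(missing)
    return violations
```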
543
00:56:22,980 --> 00:56:26,180
Phase one is the moment you decide whether Azure will reduce entropy
544
00:56:26,180 --> 00:56:27,780
or just host it more expensively.
545
00:56:27,780 --> 00:56:31,780
So make phase one non-negotiable, ship the landing zone, ship the shared services,
546
00:56:31,780 --> 00:56:36,180
ship the identity and policy contract, ship the network, ship the factory definitions,
547
00:56:36,180 --> 00:56:39,380
then, and only then, you start the assembly line.
548
00:56:39,380 --> 00:56:43,780
Phase two, migration at scale, factory model plus repeatable validation.
549
00:56:43,780 --> 00:56:47,380
Phase two is where the factory either becomes a delivery system
550
00:56:47,380 --> 00:56:50,180
or it becomes a backlog with nicer branding.
551
00:56:50,180 --> 00:56:54,180
This is the migration factory, an assembly line that produces consistent outcomes,
552
00:56:54,180 --> 00:56:55,780
not heroic one-offs.
553
00:56:55,780 --> 00:56:58,180
And the only way it works is if you standardize the boring parts
554
00:56:58,180 --> 00:57:01,380
so teams can spend their creativity on the parts that actually matter.
555
00:57:01,380 --> 00:57:03,780
Every migration wave should run the same cadence,
556
00:57:03,780 --> 00:57:08,980
discover, map dependencies, decide the path, move, validate, stabilize,
557
00:57:08,980 --> 00:57:12,980
not because you love process, but because repeatability is how you prevent entropy
558
00:57:12,980 --> 00:57:15,780
from scaling faster than your governance.
559
00:57:15,780 --> 00:57:18,180
Discovery isn't "we found the servers."
560
00:57:18,180 --> 00:57:22,580
It's application dependency reality, identity flows, DNS assumptions,
561
00:57:22,580 --> 00:57:26,580
inbound and outbound traffic, certificates, batch schedules, file drops,
562
00:57:26,580 --> 00:57:29,380
and the quiet integration points that don't show up in diagrams
563
00:57:29,380 --> 00:57:32,180
but absolutely show up during cutover.
564
00:57:32,180 --> 00:57:35,780
Then you decide the path explicitly, re-host, re-platform, re-factor, retire,
565
00:57:35,780 --> 00:57:36,980
no hybrid euphemisms.
566
00:57:36,980 --> 00:57:40,180
If you can't say which one it is, you're about to accidentally re-factor a workload
567
00:57:40,180 --> 00:57:42,180
you meant to re-host. Then you move.
568
00:57:42,180 --> 00:57:44,980
But the move is not the milestone, validation is the milestone.
569
00:57:44,980 --> 00:57:48,180
Validation has to be standardized, ruthless and boring,
570
00:57:48,180 --> 00:57:49,780
and it needs four gates.
571
00:57:49,780 --> 00:57:53,380
First, completeness. Did every required component land where it should
572
00:57:53,380 --> 00:57:57,380
with the right dependencies, in the right subscription, under the right management group,
573
00:57:57,380 --> 00:57:59,380
inheriting the right policies?
574
00:57:59,380 --> 00:58:01,780
Second, performance baseline. Not "it seems fast."
575
00:58:01,780 --> 00:58:04,580
Baseline it against what the business expects, then measure.
576
00:58:04,580 --> 00:58:08,980
If performance changes, you explain why, and you decide whether that change is acceptable
577
00:58:08,980 --> 00:58:10,980
before you declare success.
578
00:58:10,980 --> 00:58:14,180
Third, security posture. Identity patterns compliant, network paths
579
00:58:14,180 --> 00:58:16,580
predictable, logs flowing to the expected destinations,
580
00:58:16,580 --> 00:58:18,980
and alerts routed to the right responders.
581
00:58:18,980 --> 00:58:23,380
If the workload can't be monitored and investigated, it is not migrated,
582
00:58:23,380 --> 00:58:24,180
it is hidden.
583
00:58:24,180 --> 00:58:25,780
Fourth, business sign off.
584
00:58:25,780 --> 00:58:28,580
The people who own outcomes, not the people who own terraform,
585
00:58:28,580 --> 00:58:32,580
confirm the workflow is correct. After validation comes stabilization,
586
00:58:32,580 --> 00:58:34,580
and this is where most programs lie to themselves.
587
00:58:34,580 --> 00:58:37,780
They say they'll optimize later. Later becomes never.
588
00:58:37,780 --> 00:58:42,180
So you create an optimization backlog at the end of each wave with owners and dates,
589
00:58:42,180 --> 00:58:46,580
right sizing, reserved capacity decisions, logging noise reduction,
590
00:58:46,580 --> 00:58:49,780
and technical debt you intentionally carried during re-host.
591
00:58:49,780 --> 00:58:53,780
If nobody owns it, it will rot in production forever.
592
00:58:53,780 --> 00:58:55,780
And here's the trap to avoid.
593
00:58:55,780 --> 00:58:57,780
Don't let the factory become an approval board.
594
00:58:57,780 --> 00:58:58,980
That's not scale.
595
00:58:58,980 --> 00:59:00,980
That's bureaucracy.
596
00:59:00,980 --> 00:59:03,380
The factory exists to produce paved roads,
597
00:59:03,380 --> 00:59:06,180
self-service subscription vending, pre-wired monitoring,
598
00:59:06,180 --> 00:59:09,780
default network integration, and enforcement through policy.
599
00:59:09,780 --> 00:59:12,180
Teams should be able to onboard without begging,
600
00:59:12,180 --> 00:59:15,380
but they should be unable to bypass guardrails without leaving evidence.
601
00:59:15,380 --> 00:59:16,580
That's the balance.
602
00:59:16,580 --> 00:59:19,380
Migration at scale is not speed, it's throughput without drift.
603
00:59:19,380 --> 00:59:21,380
And if you can't do repeatable validation,
604
00:59:21,380 --> 00:59:24,180
you're not scaling migration, you're scaling risk.
605
00:59:24,180 --> 00:59:28,580
Phase three, modernization: managed services, containers, event-driven.
606
00:59:28,580 --> 00:59:32,180
Phase three is where everyone finally gets what they asked for on day one,
607
00:59:32,180 --> 00:59:33,380
cloud native.
608
00:59:33,380 --> 00:59:36,180
And this is where most programs either compound value,
609
00:59:36,180 --> 00:59:37,780
or light the runway on fire,
610
00:59:37,780 --> 00:59:40,980
because modernization is not an activity, it's a consequence.
611
00:59:40,980 --> 00:59:43,780
When phase one and phase two are real
612
00:59:43,780 --> 00:59:47,380
(landing zones enforced, shared services stable, factory validation
613
00:59:47,380 --> 00:59:50,580
repeatable), modernization stops being a religious war
614
00:59:50,580 --> 00:59:53,380
and becomes a portfolio of deliberate bets.
615
00:59:53,380 --> 00:59:55,380
And the bets usually fall into three buckets,
616
00:59:55,380 --> 00:59:58,380
managed services, containers, and event driven integration.
617
00:59:58,380 --> 01:00:01,980
Start with managed services, because that's the most boring kind of modernization.
618
01:00:01,980 --> 01:00:04,580
And boring is usually where the ROI hides.
619
01:00:04,580 --> 01:00:09,180
Moving a SQL server that nobody wants to patch into a managed database isn't innovative,
620
01:00:09,180 --> 01:00:11,380
but it collapses your operational burden.
621
01:00:11,380 --> 01:00:15,580
Moving identity handling out of custom code into standardized identity patterns
622
01:00:15,580 --> 01:00:18,780
isn't sexy, but it reduces incident time.
623
01:00:18,780 --> 01:00:22,780
Replacing DIY backup scripts with platform aligned backup and retention patterns
624
01:00:22,780 --> 01:00:26,380
isn't a transformation story, but it's the difference between a contained outage
625
01:00:26,380 --> 01:00:27,980
and a multi-day recovery.
626
01:00:27,980 --> 01:00:31,180
Managed services are modernization that buys back human time.
627
01:00:31,180 --> 01:00:33,780
And human time is the scarce resource in the program.
628
01:00:33,780 --> 01:00:35,980
So when you modernize into managed services,
629
01:00:35,980 --> 01:00:37,180
you're doing two things at once,
630
01:00:37,180 --> 01:00:39,780
reducing costs to serve and making risk more predictable.
631
01:00:39,780 --> 01:00:42,580
You're removing snowflake operational work from the workload teams
632
01:00:42,580 --> 01:00:45,380
and pushing it into the platform's standard capabilities.
633
01:00:45,380 --> 01:00:48,780
That's why phase three can only happen after the platform is stable.
634
01:00:48,780 --> 01:00:52,380
If the platform isn't stable, managed services don't simplify.
635
01:00:52,380 --> 01:00:53,780
They multiply confusion.
636
01:00:53,780 --> 01:00:56,780
Teams don't know what's supported, security doesn't know how to monitor it,
637
01:00:56,780 --> 01:00:58,980
and operations doesn't know how to recover it.
638
01:00:58,980 --> 01:01:00,580
Now, containers. Here's the uncomfortable truth.
639
01:01:00,580 --> 01:01:02,980
Containerization is not modernization by default.
640
01:01:02,980 --> 01:01:03,980
It's packaging.
641
01:01:03,980 --> 01:01:05,380
Sometimes it's the right packaging.
642
01:01:05,380 --> 01:01:08,180
Sometimes it's just a more complicated way to run the same mess.
643
01:01:08,180 --> 01:01:10,980
Containers earn their keep when you need repeatable deployments,
644
01:01:10,980 --> 01:01:13,380
consistent runtime environments and scaling patterns
645
01:01:13,380 --> 01:01:15,380
that VMs never gave you cleanly.
646
01:01:15,380 --> 01:01:18,980
They also earn their keep when you're trying to decouple release velocity
647
01:01:18,980 --> 01:01:22,580
from infrastructure change because when your deployment becomes push a container,
648
01:01:22,580 --> 01:01:24,780
infrastructure stops being a negotiation.
649
01:01:24,780 --> 01:01:27,180
But containerization becomes a self-inflicted wound
650
01:01:27,180 --> 01:01:30,180
when teams use it to avoid making architectural decisions.
651
01:01:30,180 --> 01:01:32,580
They take a legacy monolith, put it in a container,
652
01:01:32,580 --> 01:01:34,180
drop it onto an orchestrator,
653
01:01:34,180 --> 01:01:36,380
and then act surprised when nothing got easier.
654
01:01:36,380 --> 01:01:37,780
The same coupling exists.
655
01:01:37,780 --> 01:01:39,780
The same database is still the choke point.
656
01:01:39,780 --> 01:01:41,780
The same brittle integrations still break.
657
01:01:41,780 --> 01:01:45,380
The only thing that changed is now you have more layers between you and the failure.
658
01:01:45,380 --> 01:01:46,580
So the stance is simple.
659
01:01:46,580 --> 01:01:48,780
Use containers where they reduce operational friction,
660
01:01:48,780 --> 01:01:50,180
not where they increase it.
661
01:01:50,180 --> 01:01:51,780
And be honest about orchestrators.
662
01:01:51,780 --> 01:01:54,380
If you're going to run a platform like Kubernetes,
663
01:01:54,380 --> 01:01:56,180
you're taking on an operating model.
664
01:01:56,180 --> 01:01:59,380
Upgrades, patching, cluster hygiene, networking complexity,
665
01:01:59,380 --> 01:02:01,780
and a security posture that requires maturity.
666
01:02:01,780 --> 01:02:03,980
If your organization isn't ready for that,
667
01:02:03,980 --> 01:02:07,580
containerization becomes new tech debt with better marketing.
668
01:02:07,580 --> 01:02:09,380
So now event-driven integration.
669
01:02:09,380 --> 01:02:12,580
This is the modernization path that actually changes how systems behave,
670
01:02:12,580 --> 01:02:14,780
which is why it's also the one that breaks people.
671
01:02:14,780 --> 01:02:17,180
Event-driven doesn't mean "we use a message queue."
672
01:02:17,180 --> 01:02:20,180
It means you stop pretending synchronous coupling is a good default.
673
01:02:20,180 --> 01:02:22,980
You stop having systems block on each other's availability.
674
01:02:22,980 --> 01:02:25,780
You stop designing your entire enterprise around the idea
675
01:02:25,780 --> 01:02:28,580
that everything is online and deterministic all the time.
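To make the decoupling concrete, here's a minimal Python sketch using an in-memory queue in place of a real broker (Azure Service Bus or Event Grid would play that role); all names are illustrative. The producer enqueues and moves on; a failing consumer retries and dead-letters instead of blocking the producer.

```python
from collections import deque

def publish(queue, event):
    """Producer: enqueue and return immediately; never blocks on the consumer."""
    queue.append(event)

def drain(queue, handler, max_retries=3):
    """Consumer: process events independently. A failure retries, then
    dead-letters; it never propagates back to the producer."""
    processed, dead_letter = [], []
    while queue:
        event = queue.popleft()
        for attempt in range(max_retries):
            try:
                handler(event)
                processed.append(event)
                break
            except Exception:
                if attempt == max_retries - 1:
                    dead_letter.append(event)  # park it; don't lose it, don't block

    return processed, dead_letter

# The producer succeeds even if the consumer is down or flaky.
q = deque()
publish(q, {"order_id": 1})
publish(q, {"order_id": 2})
ok, dead = drain(q, handler=lambda e: None)
```

The design choice worth noting: availability of the consumer stops being the producer's problem, which is exactly the coupling the transcript says synchronous designs can't escape.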
676
01:02:28,580 --> 01:02:30,780
Legacy systems love synchronous coupling,
677
01:02:30,780 --> 01:02:32,180
because it feels like control.
678
01:02:32,180 --> 01:02:34,180
Modern systems survive through decoupling
679
01:02:34,180 --> 01:02:36,180
because reality isn't controllable.
680
01:02:36,180 --> 01:02:38,580
Event-driven patterns reduce blast radius.
681
01:02:38,580 --> 01:02:39,580
They make change safer.
682
01:02:39,580 --> 01:02:41,580
They make integrations more tolerant of failure.
683
01:02:41,580 --> 01:02:43,780
They also make incident response more complex
684
01:02:43,780 --> 01:02:46,780
if your observability is weak, which is why, again,
685
01:02:46,780 --> 01:02:50,380
phase three is a privilege you earn after phase one and phase two are stable.
686
01:02:50,380 --> 01:02:51,780
Because once events start flowing,
687
01:02:51,780 --> 01:02:55,780
you need the ability to answer what happened across multiple services,
688
01:02:55,780 --> 01:02:57,980
multiple subscriptions and multiple teams
689
01:02:57,980 --> 01:03:01,380
without turning the investigation into a week-long archaeology dig.
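A minimal sketch of the correlation-ID discipline that makes "what happened" answerable across services. The helper names are hypothetical; a real system would carry the ID in message headers or W3C trace context rather than the payload envelope.

```python
import uuid

def new_event(payload, correlation_id=None):
    """Stamp every event with a correlation ID at the edge; downstream
    services propagate it unchanged instead of minting their own."""
    return {
        "correlation_id": correlation_id or str(uuid.uuid4()),
        "payload": payload,
    }

def handle_and_forward(event, transform):
    """A downstream service does its work but carries the ID forward."""
    return new_event(transform(event["payload"]), event["correlation_id"])

def trace(log, correlation_id):
    """Incident response: pull one request's path across every service."""
    return [e for e in log if e["correlation_id"] == correlation_id]

# Three services, one shared (conceptually centralized) log.
log = []
order = new_event({"order_id": 42})                                    # service A mints the ID
log.append(order)
billed = handle_and_forward(order, lambda p: {**p, "billed": True})    # service B
log.append(billed)
shipped = handle_and_forward(billed, lambda p: {**p, "shipped": True}) # service C
log.append(shipped)
```

With the ID enforced, `trace` answers the cross-service question in one query instead of a dig through three teams' logs.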
690
01:03:01,380 --> 01:03:03,180
Now data.
691
01:03:03,180 --> 01:03:05,580
Data modernization is where a lot of migration programs
692
01:03:05,580 --> 01:03:08,380
quietly bleed out because everybody underestimates it.
693
01:03:08,380 --> 01:03:10,780
Large transfers aren't hard because the bytes are big.
694
01:03:10,780 --> 01:03:12,780
They're hard because the validation is brutal.
695
01:03:12,780 --> 01:03:14,980
You can move terabytes, even petabytes,
696
01:03:14,980 --> 01:03:17,180
but the business doesn't care that you move data.
697
01:03:17,180 --> 01:03:19,780
The business cares that it's complete, accurate, and usable.
698
01:03:19,780 --> 01:03:21,980
That means orchestration, reconciliation and discipline.
699
01:03:21,980 --> 01:03:24,980
Tools exist for bulk transfer when bandwidth is the constraint
700
01:03:24,980 --> 01:03:27,980
and orchestration exists when coordination is the constraint.
701
01:03:27,980 --> 01:03:29,380
But the principle doesn't change.
702
01:03:29,380 --> 01:03:31,980
You validate before, during and after.
703
01:03:31,980 --> 01:03:34,380
And you don't declare victory because the copy finished.
704
01:03:34,380 --> 01:03:37,580
You declare victory when downstream consumers behave as expected.
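One way to sketch "validate before, during, and after" is per-partition reconciliation: row counts plus an order-independent content digest computed on both sides of the copy. This is illustrative logic, not a specific migration tool.

```python
import hashlib

def partition_digest(rows):
    """Order-independent digest of a partition: hash each row, XOR the
    results, and keep the row count. Cheap to compute on both sides."""
    count, acc = 0, 0
    for row in rows:
        h = hashlib.sha256(repr(row).encode()).digest()
        acc ^= int.from_bytes(h[:8], "big")
        count += 1
    return {"count": count, "digest": acc}

def reconcile(source_parts, target_parts):
    """Compare per-partition digests; report what's missing or mismatched
    instead of declaring victory because the copy finished."""
    issues = []
    for name, src in source_parts.items():
        tgt = target_parts.get(name)
        if tgt is None:
            issues.append((name, "missing in target"))
        elif src != tgt:
            issues.append((name, "count/content mismatch"))
    return issues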
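One way to sketch "validate before, during, and after" is per-partition reconciliation: row counts plus an order-independent content digest computed on both sides of the copy. This is illustrative logic, not a specific migration tool.

```python
import hashlib

def partition_digest(rows):
    """Order-independent digest of a partition: hash each row, XOR the
    results, and keep the row count. Cheap to compute on both sides."""
    count, acc = 0, 0
    for row in rows:
        h = hashlib.sha256(repr(row).encode()).digest()
        acc ^= int.from_bytes(h[:8], "big")
        count += 1
    return {"count": count, "digest": acc}

def reconcile(source_parts, target_parts):
    """Compare per-partition digests; report what's missing or mismatched
    instead of declaring victory because the copy finished."""
    issues = []
    for name, src in source_parts.items():
        tgt = target_parts.get(name)
        if tgt is None:
            issues.append((name, "missing in target"))
        elif src != tgt:
            issues.append((name, "count/content mismatch"))
    return issues
```

Run the digest before the transfer, after the transfer, and again after cutover; an empty issue list is the precondition for, not the definition of, success.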
705
01:03:37,580 --> 01:03:39,780
Now connect this back to the strategic insight
706
01:03:39,780 --> 01:03:42,980
because this is the part that keeps the program from self-destructing.
707
01:03:42,980 --> 01:03:45,980
Modernization is a lagging indicator of platform maturity.
708
01:03:45,980 --> 01:03:47,780
If your platform contract is enforced,
709
01:03:47,780 --> 01:03:50,780
modernization compounds, patterns get reused,
710
01:03:50,780 --> 01:03:53,580
teams onboard faster, controls remain consistent,
711
01:03:53,580 --> 01:03:56,780
cost variance drops because you standardize what good looks like.
712
01:03:56,780 --> 01:03:58,580
If your platform contract is weak,
713
01:03:58,580 --> 01:04:00,580
modernization becomes probabilistic.
714
01:04:00,580 --> 01:04:02,180
Every team builds a different pattern,
715
01:04:02,180 --> 01:04:05,580
exceptions proliferate, and cloud native becomes a synonym for
716
01:04:05,580 --> 01:04:07,980
"we don't know what this will cost or how it will fail."
717
01:04:07,980 --> 01:04:10,980
So phase three is not "now we refactor everything."
718
01:04:10,980 --> 01:04:14,780
Phase three is selective modernization where the platform can carry it
719
01:04:14,780 --> 01:04:16,380
and where the business case is explicit.
720
01:04:16,380 --> 01:04:18,780
You modernize the workloads that are differentiators.
721
01:04:18,780 --> 01:04:21,780
You re-platform the workloads that benefit from managed services.
722
01:04:21,780 --> 01:04:24,380
You keep re-hosting where stability is the priority.
723
01:04:24,380 --> 01:04:26,580
And you delete what should never have existed.
724
01:04:26,580 --> 01:04:28,180
And if you do it in that order,
725
01:04:28,180 --> 01:04:30,180
modernization stops being a risky leap.
726
01:04:30,180 --> 01:04:32,780
It becomes the payoff for doing the boring parts correctly.
727
01:04:32,780 --> 01:04:35,780
Failure story: the premature refactor that stalled the program.
728
01:04:35,780 --> 01:04:38,380
A financial services firm kicked off their cloud program
729
01:04:38,380 --> 01:04:41,580
with the most common executive mandate: we're going cloud native.
730
01:04:41,580 --> 01:04:43,380
No lift and shift, we're not paying twice,
731
01:04:43,380 --> 01:04:44,980
refactor everything.
732
01:04:44,980 --> 01:04:46,780
Leadership intent sounded rational.
733
01:04:46,780 --> 01:04:49,180
They wanted modernization benefits immediately.
734
01:04:49,180 --> 01:04:51,780
They wanted to avoid carrying legacy debt into Azure.
735
01:04:51,780 --> 01:04:53,380
They wanted the story to be clean.
736
01:04:53,380 --> 01:04:56,180
One move, one transformation, no half measures.
737
01:04:56,180 --> 01:04:58,380
What actually happened was a slow motion stall.
738
01:04:58,380 --> 01:05:00,180
The first few workloads didn't move.
739
01:05:00,180 --> 01:05:01,580
They entered discovery.
740
01:05:01,580 --> 01:05:03,580
Then discovery turned into architecture reviews.
741
01:05:03,580 --> 01:05:06,380
Then architecture reviews turned into standards debates.
742
01:05:06,380 --> 01:05:09,580
Then those debates turned into new platform requirements.
743
01:05:09,580 --> 01:05:12,580
New ingress patterns, new identity flows,
744
01:05:12,580 --> 01:05:14,980
new logging formats, new container strategy,
745
01:05:14,980 --> 01:05:17,380
new data model rules, all at once.
746
01:05:17,380 --> 01:05:20,980
Meanwhile, the platform team hadn't finished the landing zone contract.
747
01:05:20,980 --> 01:05:22,380
Networking was still in flux.
748
01:05:22,380 --> 01:05:24,180
Policies were still being negotiated.
749
01:05:24,180 --> 01:05:25,780
Observability wasn't standardized.
750
01:05:25,780 --> 01:05:27,580
Subscription vending wasn't real.
751
01:05:27,580 --> 01:05:30,780
So every cloud native decision required a new exception.
752
01:05:30,780 --> 01:05:33,380
And exceptions aren't acceleration, they're debt.
753
01:05:33,380 --> 01:05:36,180
Nothing broke technically, everything broke systemically.
754
01:05:36,180 --> 01:05:39,380
The invisible constraint they ignored was platform immaturity.
755
01:05:39,380 --> 01:05:42,180
When your platform can't enforce consistent identity,
756
01:05:42,180 --> 01:05:45,180
network paths, policy and observability,
757
01:05:45,180 --> 01:05:47,180
refactoring becomes probabilistic.
758
01:05:47,180 --> 01:05:49,980
The same refactor looks different in every team's hands.
759
01:05:49,980 --> 01:05:53,180
The same security requirement produces different access models.
760
01:05:53,180 --> 01:05:55,980
The same deployment objective produces different pipelines.
761
01:05:55,980 --> 01:05:58,380
You get fragmentation disguised as innovation.
762
01:05:58,380 --> 01:06:01,580
The program also discovered another constraint executives hate.
763
01:06:01,580 --> 01:06:03,780
Refactoring consumes your best people first.
764
01:06:03,780 --> 01:06:06,580
The engineers who could have built repeatable patterns were stuck
765
01:06:06,580 --> 01:06:08,780
rewriting one application's internals.
766
01:06:08,780 --> 01:06:11,980
Operations couldn't stabilize because nothing was landing consistently.
767
01:06:11,980 --> 01:06:13,780
Security couldn't standardize monitoring
768
01:06:13,780 --> 01:06:15,780
because every workload looked different.
769
01:06:15,780 --> 01:06:16,780
And because nothing shipped,
770
01:06:16,780 --> 01:06:19,180
leadership started asking for more velocity,
771
01:06:19,180 --> 01:06:21,380
which translated into more parallel work streams,
772
01:06:21,380 --> 01:06:23,180
which translated into more drift.
773
01:06:23,180 --> 01:06:25,980
Then the business got impatient, they still had data center deadlines,
774
01:06:25,980 --> 01:06:27,780
they still had vendor support timelines,
775
01:06:27,780 --> 01:06:30,180
they still had hardware renewals, so they pivoted.
776
01:06:30,180 --> 01:06:32,380
Fine, just lift and shift the easy stuff.
777
01:06:32,380 --> 01:06:34,380
Now they had the worst of both worlds,
778
01:06:34,380 --> 01:06:35,980
half-built refactors,
779
01:06:35,980 --> 01:06:38,380
a rushed re-host wave and a platform contract
780
01:06:38,380 --> 01:06:41,980
that never fully hardened because it kept getting overwritten by urgency.
781
01:06:41,980 --> 01:06:43,980
The principle that would have prevented it is boring
782
01:06:43,980 --> 01:06:46,580
and absolute: sequence modernization after stability.
785
01:06:48,780 --> 01:06:51,780
Re-host or re-platform to build throughput,
784
01:06:48,780 --> 01:06:51,780
prove guardrails and stabilize operating patterns.
785
01:06:51,780 --> 01:06:54,580
Then refactor selectively where the business case is real
786
01:06:54,580 --> 01:06:56,180
and the platform can carry it.
787
01:06:56,180 --> 01:06:59,180
Modernization is a lagging indicator of platform maturity.
788
01:06:59,180 --> 01:07:01,380
If you force it early, you don't modernize faster.
789
01:07:01,380 --> 01:07:02,980
You just fail in more expensive ways.
790
01:07:02,980 --> 01:07:05,380
And that leads to the part most people try to avoid
791
01:07:05,380 --> 01:07:07,180
because it implicates everyone in the room.
792
01:07:07,180 --> 01:07:09,380
Cloud migration doesn't fail in Terraform.
793
01:07:09,380 --> 01:07:10,780
It fails in org charts.
794
01:07:10,780 --> 01:07:14,380
Organizational reality: this fails in org charts, not Terraform.
795
01:07:14,380 --> 01:07:16,980
The foundational mistake in most enterprise migrations
796
01:07:16,980 --> 01:07:19,380
is assuming the hard part is technical execution.
797
01:07:19,380 --> 01:07:20,380
It isn't.
798
01:07:20,380 --> 01:07:23,180
The hard part is that Azure forces an operating model,
799
01:07:23,180 --> 01:07:24,780
whether you design one or not,
800
01:07:24,780 --> 01:07:26,180
if you don't design it, you still get one.
801
01:07:26,180 --> 01:07:28,180
It's just accidental.
802
01:07:28,180 --> 01:07:30,180
That's what cloud sprawl actually is:
803
01:07:30,180 --> 01:07:33,180
accidental organizational design expressed as subscriptions,
804
01:07:33,180 --> 01:07:36,780
role assignments, exceptions and half-owned services.
805
01:07:36,780 --> 01:07:38,580
Over time, it becomes permanent
806
01:07:38,580 --> 01:07:41,180
because nobody can unwind it without breaking something.
807
01:07:41,180 --> 01:07:42,780
So let's talk about the real system,
808
01:07:42,780 --> 01:07:44,780
platform teams versus product teams.
809
01:07:44,780 --> 01:07:46,780
A platform team owns the contract,
810
01:07:46,780 --> 01:07:50,980
landing zones, identity boundaries, network, policy, logging,
811
01:07:50,980 --> 01:07:53,380
and the paved roads that let delivery teams move
812
01:07:53,380 --> 01:07:55,580
without negotiating every constraint.
813
01:07:55,580 --> 01:07:58,580
They build the control plane behavior of the enterprise,
814
01:07:58,580 --> 01:08:01,380
product teams own workloads, features, business outcomes,
815
01:08:01,380 --> 01:08:03,380
and the operational responsibility for their service
816
01:08:03,380 --> 01:08:05,780
inside the boundaries the platform provides.
817
01:08:05,780 --> 01:08:08,180
When that boundary is unclear, everything degrades:
818
01:08:08,180 --> 01:08:09,780
platform teams become ticket queues,
819
01:08:09,780 --> 01:08:12,180
product teams become policy negotiators.
820
01:08:12,180 --> 01:08:14,180
And governance turns into conditional chaos.
821
01:08:14,180 --> 01:08:17,180
This is why a lot of cloud centers of excellence fail.
822
01:08:17,180 --> 01:08:19,580
Leadership intends centralized expertise,
823
01:08:19,580 --> 01:08:21,780
reduced risk, standardized patterns.
824
01:08:21,780 --> 01:08:24,980
What actually happens is the CCOE becomes an approval board.
825
01:08:24,980 --> 01:08:27,380
They review every request, they gate every deployment,
826
01:08:27,380 --> 01:08:29,380
they become the human API for cloud access.
827
01:08:29,380 --> 01:08:30,980
That feels safe for about three weeks
828
01:08:30,980 --> 01:08:32,780
until delivery pressure rises.
829
01:08:32,780 --> 01:08:35,780
Then teams route around them, they create shadow subscriptions.
830
01:08:35,780 --> 01:08:37,380
They deploy outside guardrails,
831
01:08:37,380 --> 01:08:38,780
they do whatever they need to ship
832
01:08:38,780 --> 01:08:40,580
because the business rewards outcomes
833
01:08:40,580 --> 01:08:41,780
not compliance narratives.
834
01:08:41,780 --> 01:08:43,380
And once that bypass exists,
835
01:08:43,380 --> 01:08:44,980
it becomes your real operating model.
836
01:08:44,980 --> 01:08:46,580
You can't govern what you can't see.
837
01:08:46,580 --> 01:08:49,980
So a good CCOE if you insist on having one is not a gatekeeper.
838
01:08:49,980 --> 01:08:53,180
It is a platform product team with a backlog, service level objectives,
839
01:08:53,180 --> 01:08:55,380
and automation as the default interface.
840
01:08:55,380 --> 01:08:56,780
It produces paved roads,
841
01:08:56,780 --> 01:08:58,580
self-service subscription vending,
842
01:08:58,580 --> 01:09:00,380
enforced policy baselines,
843
01:09:00,380 --> 01:09:02,980
standard logging, standard network integration,
844
01:09:02,980 --> 01:09:06,180
and identity patterns that don't require negotiation.
845
01:09:06,180 --> 01:09:07,380
It doesn't approve work,
846
01:09:07,380 --> 01:09:08,380
it designs the system,
847
01:09:08,380 --> 01:09:10,180
so safe work is the easiest work,
848
01:09:10,180 --> 01:09:11,780
that distinction matters.
849
01:09:11,780 --> 01:09:12,980
Now, FinOps.
850
01:09:12,980 --> 01:09:14,580
Most executives hear FinOps
851
01:09:14,580 --> 01:09:16,180
and think it means "save money."
852
01:09:16,180 --> 01:09:17,980
That framing is childish.
853
01:09:17,980 --> 01:09:19,580
FinOps is a governance muscle.
854
01:09:19,580 --> 01:09:21,580
It's the mechanism that turns cloud economics
855
01:09:21,580 --> 01:09:24,180
from surprise invoices into predictable unit economics.
856
01:09:24,180 --> 01:09:25,780
It's how you measure costs to serve,
857
01:09:25,780 --> 01:09:26,980
not just raw spend.
858
01:09:26,980 --> 01:09:29,480
Cost to serve is the metric that actually matters
859
01:09:29,480 --> 01:09:30,980
in enterprise migrations.
860
01:09:30,980 --> 01:09:33,680
What does it cost to run this workload reliably,
861
01:09:33,680 --> 01:09:35,780
securely, and compliantly,
862
01:09:35,780 --> 01:09:37,080
per transaction,
863
01:09:37,080 --> 01:09:38,180
per customer,
864
01:09:38,180 --> 01:09:39,180
per policy,
865
01:09:39,180 --> 01:09:40,180
per claim,
866
01:09:40,180 --> 01:09:41,180
per plant,
867
01:09:41,180 --> 01:09:42,780
per whatever your business sells.
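Cost to serve can be sketched as tagged spend divided by business volume. The `workload` tag name is an assumption for illustration; the point is the shape of the metric (spend per unit of business), not the field names.

```python
def cost_to_serve(line_items, volumes):
    """Roll tagged spend up per workload, then divide by that workload's
    business volume (transactions, customers, claims...). Raw spend going
    up tells you nothing; spend per unit is the signal."""
    spend = {}
    for item in line_items:
        workload = item["tags"]["workload"]  # an enforced tag, not optional
        spend[workload] = spend.get(workload, 0.0) + item["cost"]
    return {w: spend[w] / volumes[w] for w in spend if volumes.get(w)}

# Illustrative: two cost line items for "claims", one for "portal".
items = [
    {"cost": 700.0, "tags": {"workload": "claims"}},
    {"cost": 500.0, "tags": {"workload": "claims"}},
    {"cost": 300.0, "tags": {"workload": "portal"}},
]
unit_costs = cost_to_serve(items, volumes={"claims": 4000, "portal": 1000})
```

Notice the dependency: this metric only exists if tagging is enforced and subscription boundaries are consistent, which is why FinOps has to sit inside governance.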
868
01:09:42,780 --> 01:09:43,780
If you don't track that,
869
01:09:43,780 --> 01:09:45,580
you can't tell whether modernization is working.
870
01:09:45,580 --> 01:09:47,180
You can only tell whether the bill went up,
871
01:09:47,180 --> 01:09:48,380
and the bill always goes up early
872
01:09:48,380 --> 01:09:49,780
if you're doing anything real.
873
01:09:49,780 --> 01:09:51,580
So FinOps has to sit inside governance,
874
01:09:51,580 --> 01:09:52,580
not outside it.
875
01:09:52,580 --> 01:09:54,580
It needs tagging standards that are enforced,
876
01:09:54,580 --> 01:09:55,580
not suggested.
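"Enforced, not suggested" means an untagged deployment is denied, not warned. Azure Policy's deny effect is the native mechanism at the control plane; this Python sketch shows the equivalent check a pipeline gate might run, with an illustrative tag set.

```python
REQUIRED_TAGS = {"workload", "owner", "cost_center"}  # illustrative tag set

def check_tags(resource):
    """Return the required tags a resource is missing. In Azure this is
    what a policy definition with effect 'deny' evaluates for you."""
    present = set(resource.get("tags", {}))
    return sorted(REQUIRED_TAGS - present)

def gate(resources):
    """Deny the whole deployment if any resource is untagged; report
    exactly which resources failed and why, so the fix is obvious."""
    violations = {r["name"]: missing
                  for r in resources
                  if (missing := check_tags(r))}
    return (len(violations) == 0), violations
```

The difference between this and a "tagging standard" on a wiki page is that the unhappy path here is a hard stop, which is the only version that survives delivery pressure.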
877
01:09:55,580 --> 01:09:57,580
It needs budgets and anomaly detection
878
01:09:57,580 --> 01:10:00,080
that actually land with the people who can act.
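A rolling-statistics sketch of the minimum that "anomaly detection that lands with people who can act" implies: flag the day, then route it to the owning team. Window and threshold are arbitrary illustrations; real FinOps tooling (Azure Cost Management alerts, for instance) provides this as a service.

```python
from statistics import mean, stdev

def spend_anomalies(daily_spend, window=7, threshold=3.0):
    """Flag days whose spend exceeds the trailing window's mean by more
    than `threshold` standard deviations. The output is a routing input:
    each flagged index should land with the team that owns the spend."""
    flagged = []
    for i in range(window, len(daily_spend)):
        history = daily_spend[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and daily_spend[i] > mu + threshold * sigma:
            flagged.append(i)
    return flagged
```

The governance point is the destination, not the math: an anomaly that lands in a central dashboard nobody owns is just a more expensive surprise invoice.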
879
01:10:00,080 --> 01:10:01,780
It needs consistent subscription boundaries
880
01:10:01,780 --> 01:10:03,780
so you can compare workloads fairly,
881
01:10:03,780 --> 01:10:05,380
and it needs executives who understand
882
01:10:05,380 --> 01:10:07,780
that predictability beats savings theater.
883
01:10:07,780 --> 01:10:09,380
Now, let's deal with the talent lie.
884
01:10:09,380 --> 01:10:11,580
Most organizations try to solve cloud migrations
885
01:10:11,580 --> 01:10:12,580
with hero teams.
886
01:10:12,580 --> 01:10:13,580
One or two experts,
887
01:10:13,580 --> 01:10:14,580
a tiger team,
888
01:10:14,580 --> 01:10:15,580
a partner group,
889
01:10:15,580 --> 01:10:17,980
a central platform crew that handles Azure,
890
01:10:17,980 --> 01:10:19,780
that model fails because it doesn't scale.
891
01:10:19,780 --> 01:10:21,180
Cloud skills have to distribute,
892
01:10:21,180 --> 01:10:22,580
patterns have to standardize.
893
01:10:22,580 --> 01:10:24,380
Cognitive load has to drop
894
01:10:24,380 --> 01:10:26,380
because your migration throughput is not a function
895
01:10:26,380 --> 01:10:28,080
of how good your best engineer is.
896
01:10:28,080 --> 01:10:29,980
It's a function of how safe your average team
897
01:10:29,980 --> 01:10:31,580
can be with the defaults.
898
01:10:31,580 --> 01:10:33,180
That's what landing zones and paved roads
899
01:10:33,180 --> 01:10:34,180
actually buy you,
900
01:10:34,180 --> 01:10:37,480
the ability for non-hero teams to ship safely inside guardrails.
901
01:10:37,480 --> 01:10:40,380
And this is why org design matters so much.
902
01:10:40,380 --> 01:10:42,980
If your operating model rewards speed over compliance,
903
01:10:42,980 --> 01:10:44,580
you will get bypasses.
904
01:10:44,580 --> 01:10:46,480
If your platform team is underfunded,
905
01:10:46,480 --> 01:10:48,580
product teams will invent their own platforms.
906
01:10:48,580 --> 01:10:50,280
If your identity team treats exceptions
907
01:10:50,280 --> 01:10:51,480
as customer service,
908
01:10:51,480 --> 01:10:53,180
you will get parallel auth paths.
909
01:10:53,180 --> 01:10:55,080
And if leadership measures apps migrated
910
01:10:55,080 --> 01:10:57,180
instead of time to market, stability,
911
01:10:57,180 --> 01:11:00,080
cost to serve predictability and organizational throughput,
912
01:11:00,080 --> 01:11:01,980
you will get motion without progress.
913
01:11:01,980 --> 01:11:03,280
The uncomfortable truth is that
914
01:11:03,280 --> 01:11:04,580
Azure doesn't break the business.
915
01:11:04,580 --> 01:11:07,180
Your organization breaks itself against Azure
916
01:11:07,180 --> 01:11:08,980
because the old habits don't map cleanly.
917
01:11:08,980 --> 01:11:10,480
So if you want migration success,
918
01:11:10,480 --> 01:11:12,180
you don't start with a tool chain.
919
01:11:12,180 --> 01:11:14,580
You start with a durable division of responsibilities,
920
01:11:14,580 --> 01:11:17,280
platform as contract, product as execution,
921
01:11:17,280 --> 01:11:18,780
governance as enforced intent,
922
01:11:18,780 --> 01:11:20,280
everything else is just the platform
923
01:11:20,280 --> 01:11:22,380
reflecting your org chart back at you.
924
01:11:22,380 --> 01:11:23,980
And it will not flatter you.
925
01:11:23,980 --> 01:11:26,980
Failure story: the CCOE that became a gatekeeper.
926
01:11:26,980 --> 01:11:28,780
A large financial services organization
927
01:11:28,780 --> 01:11:30,580
stood up a cloud center of excellence
928
01:11:30,580 --> 01:11:33,180
with the best possible intentions.
929
01:11:33,180 --> 01:11:36,480
Leadership intent was clean: centralize expertise,
930
01:11:36,480 --> 01:11:39,580
reduce risk, stop teams from doing random things in Azure.
931
01:11:39,580 --> 01:11:40,480
Make it safe.
932
01:11:40,480 --> 01:11:42,880
So they staffed the CCOE with smart people,
933
01:11:42,880 --> 01:11:44,580
architects, security leads,
934
01:11:44,580 --> 01:11:47,480
a few engineers who had survived prior migrations.
935
01:11:47,480 --> 01:11:49,280
They wrote standards, they created templates,
936
01:11:49,280 --> 01:11:50,480
they built a review process.
937
01:11:50,480 --> 01:11:52,280
What actually happened was predictable.
938
01:11:52,280 --> 01:11:54,380
The CCOE became the human control plane.
939
01:11:54,380 --> 01:11:56,480
Every subscription request became a ticket,
940
01:11:56,480 --> 01:11:58,280
every network change became a meeting,
941
01:11:58,280 --> 01:12:00,680
every exception became a negotiation.
942
01:12:00,680 --> 01:12:02,580
And because the CCOE owned cloud,
943
01:12:02,580 --> 01:12:04,280
product teams stopped owning outcomes,
944
01:12:04,280 --> 01:12:06,080
they started owning escalation parts.
945
01:12:06,080 --> 01:12:08,180
The backlog didn't just grow, it metastasized
946
01:12:08,180 --> 01:12:10,980
because demand scales faster than centralized approvals.
947
01:12:10,980 --> 01:12:13,180
Then came the first real delivery deadline,
948
01:12:13,180 --> 01:12:15,680
a regulatory change, a customer facing feature,
949
01:12:15,680 --> 01:12:17,380
something the business actually cared about.
950
01:12:17,380 --> 01:12:18,880
Teams did what teams always do
951
01:12:18,880 --> 01:12:20,880
when they're blocked by a centralized gate,
952
01:12:20,880 --> 01:12:22,380
they went around it.
953
01:12:22,380 --> 01:12:23,880
A temporary subscription appeared
954
01:12:23,880 --> 01:12:25,180
under the root management group
955
01:12:25,180 --> 01:12:27,380
because someone had permissions they shouldn't have.
956
01:12:27,380 --> 01:12:29,080
A temporary service principal got Owner
957
01:12:29,080 --> 01:12:31,680
because it was faster than waiting for PIM and access reviews.
958
01:12:31,680 --> 01:12:34,380
A temporary firewall exception got added
959
01:12:34,380 --> 01:12:37,180
because the hub team couldn't get to the request this week.
960
01:12:37,180 --> 01:12:38,680
And now the organization had two clouds,
961
01:12:38,680 --> 01:12:40,580
the governed one in the landing zone
962
01:12:40,580 --> 01:12:42,480
and the real one where delivery happened,
963
01:12:42,480 --> 01:12:45,580
nothing broke technically, everything broke systemically.
964
01:12:45,580 --> 01:12:47,780
The invisible constraint they ignored was simple,
965
01:12:47,780 --> 01:12:49,580
centralized approvals don't scale
966
01:12:49,580 --> 01:12:51,980
and they create a market for ungoverned workarounds.
967
01:12:51,980 --> 01:12:53,580
The CCOE didn't reduce risk,
968
01:12:53,580 --> 01:12:54,980
it concentrated decision making
969
01:12:54,980 --> 01:12:56,980
until teams had to bypass it to ship,
970
01:12:56,980 --> 01:12:58,780
that bypass became the default behavior
971
01:12:58,780 --> 01:12:59,980
and because it wasn't designed,
972
01:12:59,980 --> 01:13:01,180
it wasn't observable,
973
01:13:01,180 --> 01:13:02,980
security couldn't monitor consistently.
974
01:13:02,980 --> 01:13:04,580
Finance couldn't forecast reliably,
975
01:13:04,580 --> 01:13:06,380
audit couldn't explain the control narrative
976
01:13:06,380 --> 01:13:07,880
without apologizing,
977
01:13:07,880 --> 01:13:10,080
then the CCOE reacted the way approval boards
978
01:13:10,080 --> 01:13:11,280
always react: more forms,
979
01:13:11,280 --> 01:13:13,580
more gates, more documentation, more friction,
980
01:13:13,580 --> 01:13:15,280
which increased bypass behavior.
981
01:13:15,280 --> 01:13:16,780
That loop feeds itself.
982
01:13:16,780 --> 01:13:18,480
The principle that would have prevented it
983
01:13:18,480 --> 01:13:20,580
is the same one your landing zones
984
01:13:20,580 --> 01:13:23,780
were built for in the first place: paved roads.
985
01:13:23,780 --> 01:13:27,080
Self-service but constrained, automation but enforced.
986
01:13:27,080 --> 01:13:28,580
Subscription vending that creates
987
01:13:28,580 --> 01:13:30,380
the right management group placement,
988
01:13:30,380 --> 01:13:32,580
the right R-back, the right policy inheritance,
989
01:13:32,580 --> 01:13:34,680
the right logging and the right network integration
990
01:13:34,680 --> 01:13:35,780
by default,
991
01:13:35,780 --> 01:13:37,280
and a rule that's uncomfortable,
992
01:13:37,280 --> 01:13:38,080
but necessary.
993
01:13:38,080 --> 01:13:40,780
If a team can't get what they need through the paved road,
994
01:13:40,780 --> 01:13:41,880
the road is wrong.
995
01:13:41,880 --> 01:13:44,080
Fix the road, don't hand them a machete.
996
01:13:44,080 --> 01:13:45,780
The CCOE shouldn't be a gatekeeper,
997
01:13:45,780 --> 01:13:47,180
it should be a platform product team,
998
01:13:47,180 --> 01:13:48,380
a team with a backlog,
999
01:13:48,380 --> 01:13:49,480
clear service boundaries,
1000
01:13:49,480 --> 01:13:51,480
and an explicit contract.
1001
01:13:51,480 --> 01:13:52,780
Here's what you get by default,
1002
01:13:52,780 --> 01:13:54,180
here's how you request changes,
1003
01:13:54,180 --> 01:13:55,780
here's how exceptions expire,
1004
01:13:55,780 --> 01:13:57,180
here's how we measure success.
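That contract can be sketched as a vending function whose output is the full default wiring; every identifier below is an illustrative placeholder, not a real Azure name.

```python
def vend_subscription(team, environment):
    """Self-service subscription vending: one request produces a
    subscription with its management group placement, RBAC, policy
    inheritance, logging, and network integration decided by the
    platform, not negotiated per team. All names are placeholders."""
    return {
        "name": f"sub-{team}-{environment}",
        "management_group": f"mg-{'prod' if environment == 'prod' else 'nonprod'}",
        "rbac": {f"{team}-developers": "Contributor",
                 "platform-operations": "Reader"},
        "policies": ["baseline-security", "required-tags", "allowed-regions"],
        "diagnostics_workspace": "log-platform-central",
        "network": {"hub_peering": True, "default_egress": "hub-firewall"},
    }

sub = vend_subscription("payments", "prod")
```

The point of the sketch is what's absent: there is no parameter for "skip the guardrails." A team can get a subscription in minutes, but only a governed one.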
1005
01:13:57,180 --> 01:13:58,980
Because the only scalable governance model
1006
01:13:58,980 --> 01:14:01,680
is the one where the safest path is also the easiest path.
1007
01:14:01,680 --> 01:14:04,880
Everything else is conditional chaos with a ticket number.
1008
01:14:04,880 --> 01:14:05,780
Closing synthesis:
1009
01:14:05,780 --> 01:14:07,380
the enterprise migration mindset.
1010
01:14:07,380 --> 01:14:09,280
Most organizations want a migration plan
1011
01:14:09,280 --> 01:14:11,380
when what they actually need is a migration mindset,
1012
01:14:11,380 --> 01:14:13,180
because migration is not a date you announce,
1013
01:14:13,180 --> 01:14:14,680
it's not a checklist you complete,
1014
01:14:14,680 --> 01:14:16,280
it's not a tooling decision you delegate.
1015
01:14:16,280 --> 01:14:18,480
Migration is an operating model shift,
1016
01:14:18,480 --> 01:14:21,980
it is a trust contract between central IT and delivery teams,
1017
01:14:21,980 --> 01:14:24,280
central teams stop doing ad hoc approvals
1018
01:14:24,280 --> 01:14:26,280
and instead enforce intent through design.
1019
01:14:26,280 --> 01:14:28,380
Delivery teams stop improvising infrastructure
1020
01:14:28,380 --> 01:14:29,980
and instead consume paved roads
1021
01:14:29,980 --> 01:14:31,380
that make safe work routine.
1022
01:14:31,380 --> 01:14:33,580
That distinction matters.
1023
01:14:33,580 --> 01:14:35,080
If leadership frames migration
1024
01:14:35,080 --> 01:14:37,180
as move workloads to Azure,
1025
01:14:37,180 --> 01:14:39,080
the program will measure activity,
1026
01:14:39,080 --> 01:14:40,280
how many apps moved,
1027
01:14:40,280 --> 01:14:41,580
how many servers shut down,
1028
01:14:41,580 --> 01:14:43,580
how many subscriptions exist,
1029
01:14:43,580 --> 01:14:45,180
and then the business will still be surprised,
1030
01:14:45,180 --> 01:14:46,680
surprised by audit findings,
1031
01:14:46,680 --> 01:14:47,980
surprised by incidents,
1032
01:14:47,980 --> 01:14:49,380
surprised by cost variance,
1033
01:14:49,380 --> 01:14:51,080
surprised by the fact that cloud
1034
01:14:51,080 --> 01:14:52,980
didn't magically create agility,
1035
01:14:52,980 --> 01:14:54,480
because cloud doesn't create agility,
1036
01:14:54,480 --> 01:14:56,380
governed systems create agility.
1037
01:14:56,380 --> 01:14:58,380
So the mindset is simple and brutal,
1038
01:14:58,380 --> 01:14:59,380
platform first,
1039
01:14:59,380 --> 01:15:01,680
then factory, then modernization,
1040
01:15:01,680 --> 01:15:03,080
you build the landing zones on contract,
1041
01:15:03,080 --> 01:15:05,180
so intent is enforceable at scale,
1042
01:15:05,180 --> 01:15:06,580
you build the migration factory,
1043
01:15:06,580 --> 01:15:08,680
so delivery is repeatable and validated,
1044
01:15:08,680 --> 01:15:09,580
not heroic.
1045
01:15:09,580 --> 01:15:10,980
And only then do you modernize,
1046
01:15:10,980 --> 01:15:11,980
selectively,
1047
01:15:11,980 --> 01:15:13,180
where the platform can carry it
1048
01:15:13,180 --> 01:15:14,480
and the business case is real.
1049
01:15:14,480 --> 01:15:14,980
Along the way,
1050
01:15:14,980 --> 01:15:17,980
you stop pretending every workload deserves investment.
1051
01:15:17,980 --> 01:15:19,380
Some deserve replatforming
1052
01:15:19,380 --> 01:15:21,080
because it buys operational leverage,
1053
01:15:21,080 --> 01:15:22,280
some deserve re-hosting
1054
01:15:22,280 --> 01:15:24,980
because stability matters more than ideology,
1055
01:15:24,980 --> 01:15:27,880
some deserve refactoring because they're differentiators,
1056
01:15:27,880 --> 01:15:29,280
and some deserve deletion,
1057
01:15:29,280 --> 01:15:31,380
because your migration capacity is finite
1058
01:15:31,380 --> 01:15:32,780
and entropy is expensive,
1059
01:15:32,780 --> 01:15:34,680
and you track outcomes like an adult,
1060
01:15:34,680 --> 01:15:35,680
time to market,
1061
01:15:35,680 --> 01:15:37,780
how long from idea to production,
1062
01:15:37,780 --> 01:15:38,780
risk and stability,
1063
01:15:38,780 --> 01:15:41,580
do incidents decrease and does recovery get faster,
1064
01:15:41,580 --> 01:15:43,280
cost-to-serve predictability,
1065
01:15:43,280 --> 01:15:44,080
can you forecast
1066
01:15:44,080 --> 01:15:45,180
spend by workload,
1067
01:15:45,180 --> 01:15:46,980
or does it surprise you?
1068
01:15:46,980 --> 01:15:48,280
Organizational throughput,
1069
01:15:48,280 --> 01:15:49,680
how many teams can move safely
1070
01:15:49,680 --> 01:15:51,480
in parallel without governance collapsing?
1071
01:15:51,480 --> 01:15:52,980
Cloud migrations are justified
1072
01:15:52,980 --> 01:15:53,580
by outcomes,
1073
01:15:53,580 --> 01:15:54,880
not architectures.
1074
01:15:54,880 --> 01:15:56,080
That's the executive framing
1075
01:15:56,080 --> 01:15:57,080
that survives,
1076
01:15:57,080 --> 01:15:58,080
budget reviews,
1077
01:15:58,080 --> 01:15:58,980
leadership churn,
1078
01:15:58,980 --> 01:15:59,980
and audit season.
1079
01:15:59,980 --> 01:16:01,080
And this is the final thought
1080
01:16:01,080 --> 01:16:02,680
that leaders need to internalize
1081
01:16:02,680 --> 01:16:04,680
before they sign another migration mandate.
1082
01:16:04,680 --> 01:16:06,180
The goal isn't to get to Azure.
1083
01:16:06,180 --> 01:16:07,580
The goal is to build an enterprise
1084
01:16:07,580 --> 01:16:09,880
that can change faster than its market.
1085
01:16:09,880 --> 01:16:12,280
Treat migration as enterprise transformation.
1086
01:16:12,280 --> 01:16:13,280
Platform first,
1087
01:16:13,280 --> 01:16:14,880
then a migration factory,
1088
01:16:14,880 --> 01:16:16,980
then modernization when maturity earns it.
1089
01:16:16,980 --> 01:16:18,480
If you want the next layer,
1090
01:16:18,480 --> 01:16:19,480
watch the episode on
1091
01:16:19,480 --> 01:16:20,980
landing zone governance drift
1092
01:16:20,980 --> 01:16:22,780
and policy entropy, and subscribe,
1093
01:16:22,780 --> 01:16:23,880
because without enforcement,
1094
01:16:23,880 --> 01:16:25,280
your cloud strategy decays
1095
01:16:25,280 --> 01:16:26,980
into exceptions faster than you think.