This episode explores the Microsoft 365 maturity model through real-world insights gathered from auditing over 500 tenants. Instead of relying on theoretical frameworks, it uncovers how most organizations struggle with Microsoft 365 governance maturity, hidden misconfigurations, and the growing gap between perceived and actual security. You’ll learn why traditional approaches to M365 tenant audits often fail, and what patterns consistently separate mature environments from those at risk.

By breaking down a practical, experience-driven maturity formula, this episode shows how to improve Microsoft 365 governance, strengthen compliance, and scale operations effectively. It highlights the role of automation, operational discipline, and continuous assessment in achieving true Microsoft 365 maturity, making it essential listening for IT leaders, administrators, and consultants aiming to elevate their tenant security and governance strategy.


Auditing 500 Microsoft 365 tenants shows real governance maturity depends on actions, not only rules. The Microsoft 365 Maturity Model gives a useful way to check and improve your group’s governance, security, and compliance. When you focus on daily practices, you learn how to lower risks and make management easier. This helps IT leaders, architects, and security experts who want to grow and keep strong governance.

Key Takeaways

  • Check your Microsoft 365 tenant often to find ghost users and unused features. This can help you save up to 54% on license costs.
  • Use automation and set rules to make governance better. This cuts down on manual work and makes security stronger.
  • Keep checking your security settings and who can access what. This stops security problems and helps you follow the rules.
  • Use the Microsoft 365 Maturity Model to check your governance level. Try to reach a higher level to boost security, compliance, and how well things work.
  • Get your team to join training and awareness programs. This helps users spot threats and use good security habits.

9 Surprising Facts About the Microsoft 365 Maturity Model

  1. Not just IT-focused: The Microsoft 365 maturity model deliberately emphasizes organizational change, governance, and adoption—business and people aspects are measured as much as technology capabilities.
  2. It’s non-linear: Progress through maturity levels isn’t strictly sequential; organizations often advance in some domains while remaining at lower levels in others, reflecting real-world complexity.
  3. Adoption is a core metric: User adoption and behavioral change are treated as measurable outcomes, not just optional activities, influencing maturity scoring.
  4. Security and compliance span maturity levels: Security controls are expected to evolve continually; basic protections appear at early levels while risk-based, automated controls characterize higher maturity.
  5. Automation multiplies value: Higher maturity tiers emphasize automation of workflows and governance, and organizations that automate routine processes see disproportionate productivity and compliance gains.
  6. Customization can hinder maturity: Heavy customizations of Microsoft 365 can slow maturity advancement by increasing management overhead and reducing ability to adopt platform improvements.
  7. Data-driven governance is essential: Mature Microsoft 365 implementations rely on telemetry and analytics to guide policy, adoption campaigns, and lifecycle decisions rather than relying solely on manual audits.
  8. Cost savings are often indirect: Financial benefits from the Microsoft 365 maturity model frequently come from reduced support overhead, fewer third-party tools, and improved employee productivity rather than direct license reduction.
  9. Culture is the hardest part: Organizations commonly report that cultural change—leadership alignment, user trust, and change management—is the most difficult and decisive factor in achieving higher maturity levels.

Key Findings from 500+ Audits


Maturity Trends

When you look at 500 Microsoft 365 tenant audits, you see clear patterns. Many groups pay for things they do not use, and some features see little use. The table below shows the biggest trends that affect your costs:

Trend Description | Percentage Impact on Costs
Ghost users consume total licensing costs | 23%
Feature over-provisioning | 31%
Seasonal usage fluctuations | 18%

Ghost users are accounts that no longer need access. They can raise your license costs by almost a quarter. Provisioning features that people do not use wastes even more money. When people use Microsoft 365 more or less at different times of year, your spending changes too. If you run your own M365 audit, you can use these patterns to help save money. A regular report lets you watch these trends and make smarter choices.
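As a rough illustration of the ghost-user pattern, here is a minimal Python sketch that flags accounts whose last sign-in is older than a threshold. The 90-day cutoff, the record fields, and the sample accounts are illustrative assumptions, not part of the audit methodology described here:

```python
from datetime import datetime, timedelta, timezone

def find_ghost_users(users, inactive_days=90, now=None):
    """Return accounts whose last sign-in is older than the cutoff
    (or missing entirely) -- candidates for license reclamation."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=inactive_days)
    return [u["upn"] for u in users
            if u["last_sign_in"] is None or u["last_sign_in"] < cutoff]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
users = [
    {"upn": "alice@contoso.com", "last_sign_in": now - timedelta(days=5)},
    {"upn": "bob@contoso.com", "last_sign_in": now - timedelta(days=200)},
    {"upn": "svc-old@contoso.com", "last_sign_in": None},  # never signed in
]
print(find_ghost_users(users, now=now))
# ['bob@contoso.com', 'svc-old@contoso.com']
```

In a real tenant the sign-in data would come from a directory export or reporting API, but the filtering logic stays this simple.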

Common Misconfigurations

Lots of groups have the same setup problems in Microsoft 365. You might see these in your own M365 audit:

  • Only 45% of users use a configuration tool.
  • Microsoft saw 176,000 tampering events in May 2024.
  • 48% of respondents reported little or no tampering.

You might also have these problems:

  • Doing setup and checks by hand takes a lot of time.
  • Settings can change slowly without anyone noticing.
  • It is hard to see what all users and permissions are doing.
  • Keeping everything lined up is tough.

These setup mistakes can make your system less safe and give your IT team more work. A good report can help you find these problems early and fix them before they get worse.

Impact on Governance

Mistakes in governance can make your group less safe. You might see these things in your report:

  1. Governance mistakes can show weak spots in your controls.
  2. These weak spots can cause big mistakes in your reports.
  3. Regulators may look more closely at your group.

About 30% of employee benefit plan audits have big problems. These problems can mean bad testing and weak paperwork. This can make you think you are safe when you are not. It can also make people ask more questions about how you watch over things. If you fix these governance problems, you help your group stay safe and trusted.

Tip: Check your audit results often and update your governance rules. This helps you stop risks and keep your group in line with the rules.

Understanding the M365 Maturity Model

The Microsoft 365 maturity model was made by m365.fm. It helps you make your Microsoft 365 safer and follow rules better. This model does not just count how many policies or tools you have. It checks how your system works when it is tested. You can use this model to see how good your governance is and what you should do next.

Five Levels of Maturity

The Microsoft 365 maturity model has five levels. Each level shows how your group handles rules and safety.

Maturity Level | Characteristics and Requirements
Level 100 | Ad hoc, reactive processes with little documentation; success depends on individual effort.
Level 200 | Basic processes exist but are inconsistent; management is still largely reactive.
Level 300 | Defined and standard processes, policy-driven, stable environment, good user competency, limited metrics validation.
Level 400 | Actively managed IT environment, documented processes, strong governance, high user competency, adaptable IT processes with defined metrics.
Level 500 | Focus on optimization and continuous improvement, fully competent users, systematic process improvement, and performance analysis.

You start with simple, manual steps. Later, you use data and automation to get better every day. At Level 300, people begin to care about following the rules. At Level 400, teams work together and use GRC in all choices. At Level 500, you always look for ways to improve.
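Because progress is non-linear, a per-domain view is more honest than a single number. The sketch below summarizes a hypothetical per-domain assessment; the scoring math and domain names are our own illustration, not something the model prescribes:

```python
def maturity_profile(domain_levels):
    """Summarize a per-domain assessment: the weakest domain gates
    overall risk, while the average shows the general trend."""
    levels = list(domain_levels.values())
    return {
        "floor": min(levels),
        "average": round(sum(levels) / len(levels)),
        "below_300": sorted(d for d, lvl in domain_levels.items() if lvl < 300),
    }

profile = maturity_profile(
    {"identity": 400, "governance": 300, "compliance": 200, "automation": 100}
)
print(profile)
# {'floor': 100, 'average': 250, 'below_300': ['automation', 'compliance']}
```

Reporting the floor alongside the average keeps one strong domain from hiding a weak one.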

From Reactive to Optimized Governance

The Microsoft 365 maturity model helps you stop only fixing problems after they happen. It helps you build strong rules before problems start. At higher levels, you use automation and clear rules. This saves time and keeps your system safer.

Benefit | Description
Automation | Saves IT hours and ensures governance is a permanent shield, providing reliable evidence for audits.
Continuous Compliance | Facilitates ongoing adherence to policies, making protection easier to achieve.
Unified Strategy | Enhances operational efficiency and risk management through effective policy enforcement.

You match your plans with your business goals. This lowers risk and helps you manage Microsoft 365 better.

Operationalizing Governance

You need to use governance every day, not just write rules. The Microsoft 365 maturity model shows you how to create workspaces consistently, manage their lifecycle, and watch who gets access.

Governance Pillar | Focus Areas in Microsoft 365
Workspace Creation & Ownership | Standardize workspace creation with naming conventions and templates; assign clear owners for accountability.
Lifecycle Management & Cleanup | Establish rules for archiving or deleting inactive workspaces to maintain data quality and compliance.
Monitoring & Reporting | Track data use and policy enforcement to ensure privacy and quality.
Access Controls & Sharing | Use access reviews and automated permissions tracking to manage sharing and sensitive content.

You make a system that works all the time, not just for audits. The Microsoft 365 maturity model helps you build trust and control in your group.
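The workspace-creation pillar usually starts with a naming convention. A minimal sketch of enforcing one might look like this, assuming a hypothetical `<dept>-<purpose>-<env>` pattern; the department codes and pattern are assumptions for this sketch, not part of the maturity model itself:

```python
import re

# Illustrative convention: <dept>-<purpose>-<env>, e.g. "fin-budget-prod".
NAME_PATTERN = re.compile(r"^(fin|hr|it|mkt)-[a-z0-9]{3,30}-(prod|test|dev)$")

def validate_workspace_name(name):
    """Check a proposed Teams/SharePoint workspace name against the convention."""
    return bool(NAME_PATTERN.match(name.lower()))

print(validate_workspace_name("FIN-Budget-Prod"))  # True
print(validate_workspace_name("Random Team 7"))    # False
```

Hooking a check like this into the workspace provisioning flow is what turns a written rule into an operational one.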

Common Mistakes About the Microsoft 365 Maturity Model

  • Treating it as a checkbox: Believing moving a capability to the next level is just completing tasks instead of embedding sustainable practices and behaviors.
  • Focusing only on technology: Prioritizing tools and features while neglecting people, processes, governance, and adoption.
  • Assuming one-size-fits-all: Applying generic maturity targets without tailoring to business goals, industry, and organization size.
  • Equating maturity with license counts or feature usage: Measuring success by licenses purchased or feature activation rather than business outcomes and effective usage.
  • Neglecting change management and training: Underestimating the investment required to drive user adoption, role-based training, and behavior change.
  • Skipping governance and compliance: Treating governance as an afterthought instead of a foundational element that enables scale, security, and risk management.
  • No clear metrics or measurement: Failing to define KPIs, baselines, and regular assessments to track progress and value realization.
  • Underestimating integration and architecture needs: Ignoring identity, information architecture, and integration with existing systems, which creates silos and technical debt.
  • Weak executive sponsorship: Lacking visible leadership support and strategic alignment, which limits cross-functional collaboration and funding.
  • Siloed improvement efforts: Improving individual tools or teams in isolation rather than coordinating across functions for enterprise-wide outcomes.
  • Neglecting security and risk considerations: Advancing collaboration and sharing capabilities without commensurate controls, monitoring, and incident response.
  • No continuous improvement loop: Treating maturity as a one-time project instead of an ongoing practice of assessment, feedback, and refinement.
  • Failing to map maturity to business outcomes: Not connecting maturity levels to measurable business benefits like productivity, cost reduction, or customer experience.

Microsoft 365 Security Assessment Insights

Security Gaps Identified

When you do a Microsoft 365 security assessment, you find many security problems. These problems can put your group in danger. Almost every group has at least one big compliance gap. Many groups have issues because things are set up wrong. Sometimes, admins do not turn on multi-factor authentication. This makes it easier for someone to take over accounts. Many groups also cannot see or control everything in their Microsoft 365. Each week, groups deal with over 140,000 failed logins. These facts show why you must care about security all the time.

Security Gap Description | Percentage/Count
Organizations with at least one critical compliance gap | 97%
Organizations experiencing a security or compliance incident due to misconfiguration | 45%
Administrators operating without multi-factor authentication | 87%
Organizations lacking full visibility and control over their Microsoft 365 environment | 45%
Average failed login attempts per week per organization | 140,443

Bar chart showing the most common security gaps found in 500 Microsoft 365 tenant audits.

You might find other security problems in your Microsoft 365 security assessment:

  • Shadow IT and unauthorized tenants make it hard to follow rules.
  • Weak identity and access governance leaves unused accounts open.
  • Data can leak out through SharePoint and OneDrive.
  • Too many Teams can cause compliance problems.
  • Email security mistakes can lead to business email scams.

Role of Audit Logs

Audit logs are very important in your Microsoft 365 security assessment. You use them to watch for strange or risky actions. If something bad happens, audit logs show what happened and who did it. They also show what data was touched. You need audit logs to follow the rules. They give proof for regulators and help you act fast when there is a threat.

  • Watching security is easier with good logs.
  • You can look into problems with clear records.
  • Audit logs help you follow rules by tracking what users and admins do.

Tip: Always look at your audit logs when you do a Microsoft 365 security assessment. This helps you find problems before they get worse.
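A first pass over audit logs can be as simple as filtering records against a watch list of sensitive operations. This sketch assumes a simplified record shape; the operation names are illustrative and not the exact Microsoft 365 audit schema:

```python
# Watch list is an illustrative subset; real reviews track many more operations.
RISKY_OPERATIONS = {
    "Add member to role",
    "Update conditional access policy",
    "Disable Strong Authentication",
}

def flag_risky_events(events):
    """Return watch-listed audit records, oldest first, so reviewers
    can reconstruct what happened and who did it."""
    hits = [e for e in events if e["operation"] in RISKY_OPERATIONS]
    return sorted(hits, key=lambda e: e["time"])

events = [
    {"time": "2024-05-02T08:00:00Z", "user": "admin@contoso.com",
     "operation": "Add member to role"},
    {"time": "2024-05-01T09:30:00Z", "user": "alice@contoso.com",
     "operation": "FileAccessed"},
    {"time": "2024-05-01T07:15:00Z", "user": "admin@contoso.com",
     "operation": "Disable Strong Authentication"},
]
print([e["operation"] for e in flag_risky_events(events)])
# ['Disable Strong Authentication', 'Add member to role']
```

Sorting oldest-first gives reviewers the timeline they need to reconstruct an incident.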

Reducing Risk

You can lower risk by using what you learn from your Microsoft 365 security assessment. Work with your CSO, CISO, or CTO to make a strong security plan. Make sure you can see what is happening in real time. Run fake phishing tests and teach users how to spot threats. Check your security settings and user access every few months. These steps help you fix security problems and keep your group safe from new dangers.

  • Work with security leaders to make strong plans.
  • Use real-time checks to see what is going on.
  • Teach users with phishing tests.
  • Check security settings and access on a regular schedule.

A good Microsoft 365 security assessment helps you find weak spots, fix security problems, and keep your data and users safe.

Opportunities for M365 Improvement

Feature Adoption

You can get more out of Microsoft 365 by using more features. Many groups do not use the advanced security and compliance tools. When you check your setup, you may see users skip things like conditional access by device and risk. Some people do not turn on session-level restrictions or privileged identity controls. Passwordless authentication and device compliance enforcement are also missed. Advanced anti-phishing policies are not always used. A configuration review helps you find these missing parts. You should tell your teams to use sensitivity labels, DLP policies, and unified audit logging. These steps make your security better and help you follow the rules.

  • Conditional access by device and risk
  • Passwordless authentication
  • Advanced anti-phishing policies
  • Sensitivity labels and DLP policies
  • Unified audit logging

Tip: Check your setup often so you do not miss important features.

License Optimization

You can save money and use fewer unused features by fixing your licenses. A Microsoft 365 licensing assessment shows where you pay for licenses you do not need. Many groups keep licenses for ghost users or give out too many features. Checking your setup helps you match licenses to what you really use. You should remove unused accounts and pick the right subscriptions. This helps you use resources better and spend less money.

  • Look at license assignments after each check.
  • Take away licenses from ghost or inactive users.
  • Use setup checks to match features to your needs.
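The savings from reclaiming ghost-user licenses are straightforward arithmetic. A minimal sketch, assuming illustrative per-SKU prices and a simple user-to-SKU mapping (none of these numbers are real Microsoft list prices):

```python
def monthly_savings(assignments, sku_cost, ghost_users):
    """Monthly spend reclaimed by removing all licenses from ghost users."""
    return sum(
        sku_cost[sku]
        for user, skus in assignments.items() if user in ghost_users
        for sku in skus
    )

assignments = {
    "alice@contoso.com": ["E3"],
    "bob@contoso.com": ["E5", "VisioPlan2"],
    "svc-old@contoso.com": ["E3"],
}
sku_cost = {"E3": 36.0, "E5": 57.0, "VisioPlan2": 15.0}  # illustrative prices
print(monthly_savings(assignments, sku_cost,
                      {"bob@contoso.com", "svc-old@contoso.com"}))
# 108.0
```

Running this after each audit turns the ghost-user list into a concrete monthly figure you can report.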

Enhancing Compliance

You make compliance stronger by following clear steps. Start with a check to find gaps. Use setup reviews to look at your rules and controls. You should teach all users with training programs. Microsoft Purview helps you watch permissions drift and sharing outside your group. Check enterprise apps and API permissions often. Set up DLP policies, retention rules, and sensitivity labels. Keep audit logs and use litigation hold if you need it for legal reasons. Watch for insider risk and check communication compliance to stop problems.

  1. Make account and authentication rules.
  2. Set up app permissions.
  3. Add data management and storage controls.
  4. Make email security better.
  5. Turn on auditing and logging rules.
  6. Manage mobile devices with the right controls.

Note: Doing these things after each check and review helps you build strong compliance.

Pros of Microsoft 365 Maturity Model

  • Provides a structured roadmap to assess and advance Microsoft 365 adoption across people, process, and technology.
  • Helps align IT investments with business objectives by defining clear maturity stages and outcomes.
  • Encourages best practices in governance, security, compliance, and information management tailored to Microsoft 365 capabilities.
  • Facilitates measurable progress with assessment criteria, enabling prioritization of initiatives and tracking over time.
  • Supports stakeholder communication by translating technical improvements into business value and risk reduction.
  • Promotes consistent adoption and change management practices, improving end-user experience and productivity.
  • Leverages Microsoft guidance and tools, making recommendations realistic and practical for Microsoft 365 environments.
  • Can improve security posture and regulatory compliance through staged controls and policy recommendations.

Cons of Microsoft 365 Maturity Model

  • May be perceived as Microsoft-centric and less applicable to organizations with heterogeneous cloud or on-premises mixes.
  • Implementation can be resource-intensive, requiring time, budget, and skilled personnel to move between maturity levels.
  • Risk of checkbox compliance—teams may focus on reaching maturity scores rather than on meaningful cultural or operational change.
  • Generic maturity stages might not account for unique organizational contexts, industry requirements, or legacy constraints.
  • Frequent Microsoft feature changes can make maintaining alignment with the model challenging and require continuous reassessment.
  • Smaller organizations may find the model overly complex or burdensome relative to their needs and capabilities.
  • Without executive sponsorship and cross-functional collaboration, improvements may stall despite clear guidance.
  • Overemphasis on tooling and policies can neglect user adoption, training, and human factors that drive real value.

Actionable Steps for Microsoft 365 Maturity

Assessing Your Environment

You need to know what is in your M365 environment before you can make it safer or check if you follow the rules. Start by doing a careful check of your system:

  • Write down how you handle problems and make sure each step uses M365 features.
  • Make a plan for telling people when something goes wrong.
  • Practice your plans often and change them when new threats appear.
  • Use audit data to see how safe you are and find weak spots.
  • Plan a risk check to see if you meet the rules.
  • Ask users for feedback and look at system data to see how things work.
  • Find problems, choose what to fix first, and make small changes.

This way, you keep your M365 system strong and ready for new rules.

Prioritizing Security and Compliance

Work on the most important things first to keep your system safe and follow the rules:

  • Check your people, how you do things, and your technology.
  • Look at audit logs and watch for risks to know how safe you are.
  • Protect important data with special tools and watch for problems all the time.
  • Have clear steps to follow if something bad happens, and use backups that you have tested to get things back to normal.
  • Turn on MFA for all important accounts and check who can get in often.
  • Do checks often to stay ahead of dangers and keep your rules up to date.

Always match what you do with the rules and use M365 tools to help with checks and fixes.

Advancing to Higher Maturity Levels

To get better with the M365 maturity model:

  • Set your goals with help from important teams.
  • Check your setup to find risks and weak spots.
  • Give clear jobs to people for each service.
  • Make rules for making new Teams or SharePoint sites, and decide when things can leave your group to follow the rules.
  • Use names for workspaces that help with checks.
  • Add sensitivity labels and DLP policies to protect data.
  • Set up conditional access and turn on MFA for all admins.
  • Look at default settings and make them stronger if needed.
  • Plan training for users to help with new rules.
  • Use automation to do tasks so you do not have to do them by hand.
  • Watch how things are going with KPIs like how many people use the system each month.
  • Do M365 checks often to keep getting better and follow the rules.

Tip: Have a monthly M365 check with important team members. This helps your team stay on track and keeps everyone working to improve.

Microsoft 365 Maturity Model Checklist

Use this checklist to assess and plan your Microsoft 365 maturity across these key domains. Mark items complete as you progress through initiatives and controls.

  • Foundational: Identity & Access
  • Foundational: Device & Endpoint Management
  • Security & Compliance
  • Collaboration & Productivity
  • Adoption & Change Management
  • Operational Management & Support
  • Advanced: Optimization & Innovation
  • Governance & Risk Management
  • Assessment & Roadmap

Business Benefits of M365 Maturity


Enhanced Security

When you make your Microsoft 365 maturity higher, your security gets better. You keep your data and users safe from threats. You use smart tools in Microsoft 365 to find risks early. You set up strong rules for who can get in and share things. You also follow more compliance rules. Your team can act faster when something goes wrong. The table below shows how being more mature in Microsoft 365 helps you:

Improvement Type | Description
Enhanced Security | You make your security stronger with better rules and checks.
Regulatory Compliance | You follow more rules and pass audits more easily.
Operational Efficiency | You use your resources better and do less work by hand.
Business Continuity | You keep working even if something bad happens.
Competitive Advantage | You stay ahead because your Microsoft 365 is safer than others.

You also go from just knowing about Microsoft 365 to using it really well. You get more out of Microsoft 365 Copilot. You use technology in a smart and careful way.

Cost Savings

You save money when your Microsoft 365 maturity goes up. You stop paying for licenses you do not use. You remove ghost users and only give features people need. You do not get fined because you follow the rules. You spend less time fixing things. You use automation in Microsoft 365 to do less work by hand. You also lower the chance of expensive security problems. When you manage licenses and features well, you really save money. You also avoid losing money from bad compliance and weak security.

Strategic Value for Microsoft Environments

With better Microsoft 365 governance, you get more than just safety and savings. You build trust with your clients and partners. You become a trusted advisor, not just a seller. You keep your clients longer and make more money. You plan new projects with a strong base in Microsoft 365. You can change fast when your business needs change. Your team can work on important things, not just daily tasks. You also get higher secure scores and have fewer problems. When you map your processes and learn about Microsoft 365, you see good results. You make both your customers and workers happier.

Note: If your Microsoft 365 maturity is low, you face risks. You might have too much data, security holes, and wasted resources. You can avoid these problems by moving up the maturity model and using Microsoft 365 for better control and compliance.


You can make your group better by using the M365 Maturity Model. Use what you learn from each tenant’s findings and security checks. Many groups, like a big real estate trust, worked together better after following maturity tips. They also got more out of Microsoft 365. You should check your tenant against standards like the CIS Microsoft 365 Foundations Benchmark and the Essential Eight baseline. Doing a full M365 audit helps you see how you are doing and find what is missing. Look at tools like ShareGate’s assessment tool and Microsoft Security documentation to help your tenant get better.

FAQ: The Maturity Model for Microsoft 365

What is the Microsoft 365 maturity model?

The Microsoft 365 maturity model is a structured framework that defines stages of adoption, governance, security, and management across the organization to underpin real business activities. It helps teams move from ad hoc use of Microsoft 365 to well-defined governance practices and improved process performance, aligning the Microsoft 365 platform with organizational strategy and business competencies.

Where can I find official Microsoft content and practical scenarios for the model?

Official Microsoft content and practical scenarios are available on Microsoft Learn and related documentation. Additional resources, community support, and examples can also be found on GitHub, where implementations and templates related to the maturity model for Microsoft 365 are maintained by the community and partners.

How does the model define its set of business competencies?

The model defines a set of business competencies such as information governance, security updates, content management, document management, and management processes. Each competency has maturity levels describing expected capabilities, roles, and measures to improve process performance and to ensure governance practices underpin business outcomes.

How do information governance and content governance fit into the maturity model?

Information governance and content governance are core competencies in the maturity model for Microsoft 365. They cover policies for content types, retention, classification of sensitive data, compliance controls, content lifecycle, and the use of Microsoft 365 tools to enforce consistent content management across the organization.

Can the model help with ensuring compliance and management of sensitive data?

Yes. The maturity model guides organizations to implement controls that ensure compliance, protect sensitive data, and standardize management processes. It recommends security updates, technical support structures, and information governance practices to reduce risks and demonstrate compliance across audits.

How do you use the model to improve business process and management processes?

Use the model by assessing current maturity, prioritizing business process improvements, defining roles and responsibilities, and implementing governance practices to manage processes. The model helps you align the use of Microsoft 365 to improve process performance and to underpin business processes across the organization.

What does “ad hoc” mean in the context of Microsoft 365 maturity and how do you move beyond it?

“Ad hoc” refers to informal, inconsistent, or manual usage of the platform with little governance. Moving beyond ad hoc involves defining a strategy, establishing business competencies, applying information governance, standardizing content types and workflows, and adopting measured implementation steps described in the maturity model.

How does the model address content management, content types, and document management?

The model outlines practices for content management including taxonomy, content types, metadata policies, versioning, retention, and document management lifecycle. These measures help organizations manage content consistently, improve findability, and ensure compliance with policies.

What technical components should be in place (security updates, technical support) when implementing the model?

Technical components include regular security updates, patch management, identity and access controls, monitoring, backup and recovery, and a technical support structure. These underpin the Microsoft 365 platform and are necessary to maintain mature governance and reliable operations.

Are there recommended templates or implementations found on GitHub for the maturity model?

Yes. Many community members and partners publish templates, assessment tools, and implementation guidance on GitHub. These repositories often complement official Microsoft Learn content and provide practical scenarios and scripts to accelerate adoption.

How do AI outputs and new capabilities affect the maturity model and governance?

AI outputs introduce new considerations for content governance, information accuracy, and compliance. The maturity model recommends updating governance practices to validate AI outputs, control their storage as content types, manage sensitive data exposure, and ensure responsible use of AI aligned with organizational strategy.

Who should be involved when defining the maturity model for Microsoft 365 in an organization?

Defining the model requires cross-functional involvement: IT, security, compliance, business process owners, content managers, and executive sponsors. This ensures the set of business competencies is relevant to real business activities and that governance practices are adopted across the organization.

How do I measure progress and outcomes after applying the model?

Measure progress with KPIs tied to business objectives such as reduced incidents, improved compliance posture, reduced time to find content, adoption metrics, and process performance improvements. Regular assessments against the maturity levels and reviews of management processes help track outcomes over time.
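Two of the KPIs mentioned above can be computed directly from usage counts. A minimal sketch, with sample numbers chosen purely for illustration; which KPIs to track and what targets to set are organizational choices, not fixed by the model:

```python
def adoption_kpis(licensed_users, monthly_active, baseline_active):
    """Two common adoption KPIs: active-user rate and growth vs. a baseline."""
    return {
        "active_rate": round(monthly_active / licensed_users, 2),
        "growth_vs_baseline": round(
            (monthly_active - baseline_active) / baseline_active, 2),
    }

print(adoption_kpis(licensed_users=1000, monthly_active=640, baseline_active=500))
# {'active_rate': 0.64, 'growth_vs_baseline': 0.28}
```

Recomputing these after each assessment cycle gives the trend line that regular maturity reviews need.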

What additional resources should I consult to implement the model effectively?

Consult Microsoft Learn, official Microsoft content, GitHub repositories, partner guides, community support forums, and case studies with practical scenarios. These additional resources provide templates, assessments, and best practices to help you use the model and improve governance and operations.

🚀 Want to be part of m365.fm?

Then stop just listening… and start showing up.

👉 Connect with me on LinkedIn and let’s make something happen:

  • 🎙️ Be a podcast guest and share your story
  • 🎧 Host your own episode (yes, seriously)
  • 💡 Pitch topics the community actually wants to hear
  • 🌍 Build your personal brand in the Microsoft 365 space

This isn’t just a podcast — it’s a platform for people who take action.

🔥 Most people wait. The best ones don’t.

👉 Connect with me on LinkedIn and send me a message:
"I want in"

Let’s build something awesome 👊

1
00:00:00,000 --> 00:00:05,440
Hello, my name is Mirko Peters and I translate how technology actually shapes business reality.

2
00:00:05,440 --> 00:00:10,400
Most organizations believe that Microsoft 365 GRC maturity is built on more policies, more

3
00:00:10,400 --> 00:00:12,480
controls and more administrative effort.

4
00:00:12,480 --> 00:00:16,680
They think maturity looks like endless meetings and mountains of proof that someone somewhere

5
00:00:16,680 --> 00:00:18,600
is taking governance seriously.

6
00:00:18,600 --> 00:00:22,680
But after looking across 500 different tenants, I realized that isn't the pattern at all.

7
00:00:22,680 --> 00:00:26,200
The real difference between a struggling environment and a high performing one wasn't how much

8
00:00:26,200 --> 00:00:30,800
governance existed on paper, but rather how predictably that environment behaved under normal

9
00:00:30,800 --> 00:00:32,000
daily pressure.

10
00:00:32,000 --> 00:00:36,640
You can take two companies with the same licenses and similar tools, yet they will produce

11
00:00:36,640 --> 00:00:38,440
completely different outcomes.

12
00:00:38,440 --> 00:00:42,640
One tenant might need five full weeks of manual labor to prepare for a standard audit, while

13
00:00:42,640 --> 00:00:46,360
another can pull the exact same evidence in just a few days.

14
00:00:46,360 --> 00:00:50,480
We see the same thing with AI, where one team is nervous about Copilot because nobody

15
00:00:50,480 --> 00:00:55,160
trusts what the data estate might expose, while another moves faster because their data

16
00:00:55,160 --> 00:00:57,200
already has a reliable structure.

17
00:00:57,200 --> 00:01:01,080
So let me take one step back and explain what maturity actually measures from a system's

18
00:01:01,080 --> 00:01:02,080
perspective.

19
00:01:02,080 --> 00:01:03,880
What maturity really measures.

20
00:01:03,880 --> 00:01:07,840
The first thing most people miss is that maturity has nothing to do with your feature count

21
00:01:07,840 --> 00:01:09,840
or how many policies you've written.

22
00:01:09,840 --> 00:01:13,800
It is definitely not measured by the amount of complex compliance language an organization

23
00:01:13,800 --> 00:01:16,080
can produce during a steering committee meeting.

24
00:01:16,080 --> 00:01:20,600
From a system perspective, maturity is simply the ability to create consistent, measurable

25
00:01:20,600 --> 00:01:22,880
and repeatable outcomes every single day.

26
00:01:22,880 --> 00:01:26,840
This isn't about a one-time success during a special project or a frantic scramble when

27
00:01:26,840 --> 00:01:30,720
the auditors arrive, but about how the system functions when no one is watching.

28
00:01:30,720 --> 00:01:35,480
That distinction matters because many organizations confuse implementation with operationalization.

29
00:01:35,480 --> 00:01:38,760
They might tell me they have sensitivity labels set up, which is fine, but that's just

30
00:01:38,760 --> 00:01:39,960
the technical configuration.

31
00:01:39,960 --> 00:01:43,320
The real questions are whether those labels are used consistently across the board and

32
00:01:43,320 --> 00:01:45,600
if the data owners actually know where they apply.

33
00:01:45,600 --> 00:01:50,320
I want to know if you can measure your coverage and show whether those labels actually influence

34
00:01:50,320 --> 00:01:53,240
access and downstream Copilot behavior.

35
00:01:53,240 --> 00:01:57,640
Because if the answer to those questions is no, then the control might exist technically,

36
00:01:57,640 --> 00:01:59,320
but it does not exist operationally.
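That coverage question is concrete enough to compute. As a hedged sketch only, assuming you already have an exported inventory of workspaces with label and owner fields (the column names here are invented for illustration, not a real Microsoft 365 export format), the metric looks like this:

```python
# Illustrative sketch: operational coverage is a ratio you can measure,
# not a feature you switch on. The inventory below is invented sample data.
import csv
from io import StringIO

export = StringIO(
    "site,label,owner\n"
    "Finance,Confidential,jana\n"
    "Marketing,,\n"
    "Legal,Highly Confidential,omar\n"
    "Sales,,tom\n"
)

rows = list(csv.DictReader(export))
labeled = [r for r in rows if r["label"]]   # sites with any sensitivity label
owned = [r for r in rows if r["owner"]]     # sites with a named owner

coverage = len(labeled) / len(rows)
ownership = len(owned) / len(rows)
print(f"label coverage: {coverage:.0%}, ownership coverage: {ownership:.0%}")
```

If you cannot produce numbers like these on demand, the control exists technically but not operationally, which is exactly the gap described here.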

37
00:01:59,320 --> 00:02:02,440
This gap is exactly where maturity gets misread by leadership.

38
00:02:02,440 --> 00:02:07,640
I've seen tenants with premium licensing, documented policies and beautiful Purview setups

39
00:02:07,640 --> 00:02:11,240
where the operating reality was still completely reactive.

40
00:02:11,240 --> 00:02:14,720
Audit evidence had to be chased down manually by stressed employees.

41
00:02:14,720 --> 00:02:19,640
Ownership was a total mystery and sensitive content sat in broad-access locations despite

42
00:02:19,640 --> 00:02:21,320
the mature labels.

43
00:02:21,320 --> 00:02:26,320
On paper, these organizations looked sophisticated, but in practice, they were incredibly fragile.

44
00:02:26,320 --> 00:02:28,560
That's why I define maturity much more simply.

45
00:02:28,560 --> 00:02:30,760
Maturity is predictable governance behavior.

46
00:02:30,760 --> 00:02:34,560
If you remember nothing else from this, remember that phrase. Predictable governance behavior

47
00:02:34,560 --> 00:02:36,640
means the right control happens consistently.

48
00:02:36,640 --> 00:02:40,720
The right person is held accountable and the evidence exists exactly when it's needed.

49
00:02:40,720 --> 00:02:44,200
When a system is mature, the environment is designed to make good decisions easier

50
00:02:44,200 --> 00:02:45,920
for the user than bad ones.

51
00:02:45,920 --> 00:02:49,280
That is what leaders should actually care about when they look at their infrastructure.

52
00:02:49,280 --> 00:02:53,200
The goal isn't to check a box saying a capability was switched on, but to verify whether the

53
00:02:53,200 --> 00:02:58,000
tenant now produces a more stable business reality. And why is that the right lens to use?

54
00:02:58,000 --> 00:03:00,600
It's because governance is not some abstract compliance exercise.

55
00:03:00,600 --> 00:03:03,200
It shows up in your operational friction and your audit timelines.

56
00:03:03,200 --> 00:03:06,360
This clicked for me when I stopped looking at tenants as checklists and started looking

57
00:03:06,360 --> 00:03:08,440
at them as behavior engines.

58
00:03:08,440 --> 00:03:12,960
The tenant always tells the truth regardless of what your policy deck or your roadmap says.

59
00:03:12,960 --> 00:03:17,720
The environment itself reveals your true level of maturity because it produces the outcomes

60
00:03:17,720 --> 00:03:18,720
you keep getting.

61
00:03:18,720 --> 00:03:22,520
If preparing for an audit takes your team four to six weeks of overtime, that isn't just an

62
00:03:22,520 --> 00:03:23,520
audit issue.

63
00:03:23,520 --> 00:03:24,520
It's a system outcome.

64
00:03:24,520 --> 00:03:28,360
If nobody can tell you who owns a critical workspace or a high-risk data set that isn't

65
00:03:28,360 --> 00:03:31,040
a communication failure, it's a system outcome.

66
00:03:31,040 --> 00:03:35,840
When Copilot adoption stalls because legal is nervous and security is skeptical, that isn't

67
00:03:35,840 --> 00:03:37,160
an AI issue first.

68
00:03:37,160 --> 00:03:41,320
It's a governance issue wearing an AI label and this is where the business implication

69
00:03:41,320 --> 00:03:43,200
becomes impossible to ignore.

70
00:03:43,200 --> 00:03:46,920
Low maturity creates a structural drag that leads to slower decisions, more interruptions

71
00:03:46,920 --> 00:03:50,440
and a dangerous dependence on individual heroics to keep things running.

72
00:03:50,440 --> 00:03:56,040
High maturity does the opposite by replacing fragile human dependency with operating logic.

73
00:03:56,040 --> 00:03:59,680
Once you see maturity through this lens, you stop asking how many controls you have and

74
00:03:59,680 --> 00:04:02,640
start asking what outcomes those controls reliably produce.

75
00:04:02,640 --> 00:04:06,080
You stop worrying about whether you published a policy and start asking if the environment

76
00:04:06,080 --> 00:04:07,960
actually changed its behavior.

77
00:04:07,960 --> 00:04:10,480
Mature governance isn't about looking like you're in control.

78
00:04:10,480 --> 00:04:15,760
It's about staying in control when scale, turnover or new technologies like Copilot arrive.

79
00:04:15,760 --> 00:04:19,880
Most organizations fail here because they invest in the tools first and the operating model

80
00:04:19,880 --> 00:04:24,220
second, leaving them with pieces that exist but a system that doesn't hold.

81
00:04:24,220 --> 00:04:26,240
The false signals leaders read as maturity.

82
00:04:26,240 --> 00:04:30,080
So let's talk about the signals leaders keep reading as proof of maturity even when the

83
00:04:30,080 --> 00:04:32,520
operating model underneath is still weak.

84
00:04:32,520 --> 00:04:36,440
The first one is written policy. A policy matters and I'm not dismissing that, but a written

85
00:04:36,440 --> 00:04:38,320
policy is not an operating control.

86
00:04:38,320 --> 00:04:42,120
It is an instruction, maybe a useful or even a necessary one, but if the environment does

87
00:04:42,120 --> 00:04:47,760
not reinforce it, measure it and make it executable, then the policy is just intent with formatting.

88
00:04:47,760 --> 00:04:51,840
I've seen organizations with beautifully written governance standards and almost no reliable

89
00:04:51,840 --> 00:04:55,880
way to tell whether those standards changed daily behavior.

90
00:04:55,880 --> 00:04:59,220
Sharing was still broad, labels were still skipped and exceptions were still handled in

91
00:04:59,220 --> 00:05:03,600
email which meant that while the document existed, the control did not actually scale.

92
00:05:03,600 --> 00:05:05,680
The second false signal is premium licensing.

93
00:05:05,680 --> 00:05:10,760
This one is common in Microsoft 365. An organization buys E5, turns on parts of Purview,

94
00:05:10,760 --> 00:05:14,720
and maybe starts looking at Compliance Manager, but leadership then assumes that having

95
00:05:14,720 --> 00:05:17,720
the capability is the same thing as having maturity.

96
00:05:17,720 --> 00:05:21,120
It isn't. Capability is just what is available to you.

97
00:05:21,120 --> 00:05:24,280
Maturity is whether that capability has become a repeatable business behavior.

98
00:05:24,280 --> 00:05:29,040
You can have advanced tooling and still operate like a level 100 if ownership is vague, configurations

99
00:05:29,040 --> 00:05:33,160
drift and evidence depends on a few overworked people who are the only ones who know where

100
00:05:33,160 --> 00:05:35,160
everything lives.

101
00:05:35,160 --> 00:05:39,120
From a system perspective, buying control is not the same as operationalizing control.

102
00:05:39,120 --> 00:05:41,600
The third false signal is training completion.

103
00:05:41,600 --> 00:05:44,800
This is where a lot of governance programs start compensating for weak design.

104
00:05:44,800 --> 00:05:48,960
They say they trained the users but we have to ask what actually changed after that training

105
00:05:48,960 --> 00:05:49,960
was over.

106
00:05:49,960 --> 00:05:54,560
Did risky sharing decrease, did label coverage improve or did the system finally make safer

107
00:05:54,560 --> 00:05:57,360
behavior easier for the average person to follow?

108
00:05:57,360 --> 00:06:00,720
Because if not, what you measured was attendance, not governance maturity.

109
00:06:00,720 --> 00:06:04,680
This matters because leaders often confuse awareness with behavior change but behavior usually

110
00:06:04,680 --> 00:06:06,160
follows the environment.

111
00:06:06,160 --> 00:06:10,280
If the fastest path is still the risky path, people will keep taking it especially when they

112
00:06:10,280 --> 00:06:12,160
are under pressure to deliver results.

113
00:06:12,160 --> 00:06:14,280
The fourth false signal is dashboard presence.

114
00:06:14,280 --> 00:06:19,080
A dashboard can be useful and I like dashboards but a dashboard is not a decision system just

115
00:06:19,080 --> 00:06:20,320
because it exists.

116
00:06:20,320 --> 00:06:24,960
A lot of organizations have reporting, yet far fewer have trusted metrics connected to action

117
00:06:24,960 --> 00:06:27,640
and that gap is much bigger than people think.

118
00:06:27,640 --> 00:06:31,160
If the dashboard shows activity but not control effectiveness or if nobody can explain

119
00:06:31,160 --> 00:06:33,720
the thresholds, then the dashboard is just decoration.

120
00:06:33,720 --> 00:06:37,520
It creates the feeling of oversight without the reality of intervention and that's dangerous

121
00:06:37,520 --> 00:06:42,240
because executives relax around visible reporting even when the environment itself is drifting

122
00:06:42,240 --> 00:06:43,880
into risk.

123
00:06:43,880 --> 00:06:46,320
The fifth false signal is control catalog size.

124
00:06:46,320 --> 00:06:50,280
This shows up in policy libraries, risk registers and exception forms that look impressive

125
00:06:50,280 --> 00:06:52,960
because of the sheer volume of rules and categories.

126
00:06:52,960 --> 00:06:56,240
It also often creates a massive interpretation burden.

127
00:06:56,240 --> 00:07:00,200
The people inside the system now need to translate policy into action every single time

128
00:07:00,200 --> 00:07:03,240
they work, and once that burden gets too high,

129
00:07:03,240 --> 00:07:05,240
workarounds always appear.

130
00:07:05,240 --> 00:07:09,560
Teams simplify the rules for themselves, admins apply judgment inconsistently and business

131
00:07:09,560 --> 00:07:13,360
owners stop trusting the process because it feels too complex to follow at speed.

132
00:07:13,360 --> 00:07:17,480
So control volume goes up but control reliability goes down; that's the paradox.

133
00:07:17,480 --> 00:07:20,880
The thing most people miss is that all of these are comfort signals.

134
00:07:20,880 --> 00:07:25,520
Policies feel serious, licenses feel advanced and large control estates feel comprehensive

135
00:07:25,520 --> 00:07:27,960
but comfort is not the same as performance.

136
00:07:27,960 --> 00:07:31,480
None of those signals on their own tell you whether governance works under pressure and

137
00:07:31,480 --> 00:07:32,480
that's the real test.

138
00:07:32,480 --> 00:07:34,320
Can the tenant handle turnover?

139
00:07:34,320 --> 00:07:35,920
Can it handle audit scrutiny?

140
00:07:35,920 --> 00:07:40,800
Or can it handle a Copilot rollout without creating more manual coordination and hidden exposure?

141
00:07:40,800 --> 00:07:44,880
Because if the answer is no then what leadership is reading as maturity is really just structural

142
00:07:44,880 --> 00:07:45,880
compensation.

143
00:07:45,880 --> 00:07:49,840
The organization is surrounding a fragile core with documents and meetings that create reassurance

144
00:07:49,840 --> 00:07:51,720
without changing the underlying behavior.

145
00:07:51,720 --> 00:07:53,720
That's why I'm careful with maturity conversations.

146
00:07:53,720 --> 00:07:56,160
I'm not asking whether governance exists somewhere.

147
00:07:56,160 --> 00:07:59,880
I'm asking whether the environment now behaves in a controlled way by default.

148
00:07:59,880 --> 00:08:04,560
And once you start looking through that lens, low maturity has a very specific shape.

149
00:08:04,560 --> 00:08:05,560
Level 100:

150
00:08:05,560 --> 00:08:07,760
Reactive governance as a system outcome.

151
00:08:07,760 --> 00:08:11,480
So let's start at the bottom of the model because this is where a lot of organizations still

152
00:08:11,480 --> 00:08:14,560
operate even when they don't describe themselves that way.

153
00:08:14,560 --> 00:08:16,200
Level 100 is reactive governance.

154
00:08:16,200 --> 00:08:20,760
It isn't evil or even necessarily careless but it is usually fragmented, overstretched and

155
00:08:20,760 --> 00:08:23,680
dependent on people compensating for missing structure.

156
00:08:23,680 --> 00:08:27,880
From a system perspective level 100 means the environment does not produce control reliably

157
00:08:27,880 --> 00:08:31,240
on its own and control only appears when pressure arrives.

158
00:08:31,240 --> 00:08:35,240
An incident happens, an auditor asks a question, or a leader gets nervous about Copilot, and

159
00:08:35,240 --> 00:08:37,840
suddenly the organization goes into governance mode.

160
00:08:37,840 --> 00:08:38,840
That is the pattern.

161
00:08:38,840 --> 00:08:39,840
It isn't steady control.

162
00:08:39,840 --> 00:08:41,160
It is interruption driven control.

163
00:08:41,160 --> 00:08:44,640
You'll see broad access that nobody has reviewed in a long time.

164
00:08:44,640 --> 00:08:49,360
And while sensitive data exists, nobody can tell you where it is with any confidence.

165
00:08:49,360 --> 00:08:52,760
Sharing settings may technically exist but operationally they're not being monitored

166
00:08:52,760 --> 00:08:56,720
in a way that changes behavior which means risk is not contained by architecture.

167
00:08:56,720 --> 00:08:58,720
It's contained badly by hope.

168
00:08:58,720 --> 00:09:00,400
Ownership at this level is usually vague.

169
00:09:00,400 --> 00:09:05,000
Ask who owns a high-risk workspace and you'll hear that IT set it up while the business

170
00:09:05,000 --> 00:09:08,880
uses it and security advises, which really means no one actually owns the outcome.

171
00:09:08,880 --> 00:09:09,920
And why is that a problem?

172
00:09:09,920 --> 00:09:13,640
Because unclear ownership creates a single point of failure without a name.

173
00:09:13,640 --> 00:09:18,000
Everyone assumes someone else is holding the control but the reality is that nobody is.

174
00:09:18,000 --> 00:09:20,760
That is why level 100 environments rely so heavily on heroics.

175
00:09:20,760 --> 00:09:23,760
There's always a person or usually a few people like the admin who knows where the

176
00:09:23,760 --> 00:09:28,560
audit logs are or the SharePoint person who can explain why a site has unique permissions.

177
00:09:28,560 --> 00:09:31,960
From the outside that can look functional but inside it's fragile.

178
00:09:31,960 --> 00:09:36,680
Because if maturity depends on memory and personal trust then the system has no redundancy.

179
00:09:36,680 --> 00:09:42,000
It has individuals acting as structural compensation and that breaks fast under turnover, growth

180
00:09:42,000 --> 00:09:43,000
or pressure.

181
00:09:43,000 --> 00:09:46,080
Audit behavior at level 100 is one of the clearest signals.

182
00:09:46,080 --> 00:09:50,440
Nothing is really audit ready so evidence is reconstructed after the request arrives and

183
00:09:50,440 --> 00:09:54,440
people end up chasing screenshots or manually reviewing permissions.

184
00:09:54,440 --> 00:09:56,600
The organization is not proving control.

185
00:09:56,600 --> 00:09:59,880
It is trying to rebuild a story that sounds controlled.

186
00:09:59,880 --> 00:10:02,880
That's why audit preparation takes so long in immature environments.

187
00:10:02,880 --> 00:10:06,400
It isn't because auditors are difficult but because the operating model never created

188
00:10:06,400 --> 00:10:08,480
a clean evidence trail in the first place.
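One way to picture what a clean evidence trail means mechanically: controls get captured on a schedule into timestamped, hashed records, instead of being reconstructed after the request arrives. This is only an illustrative sketch; the setting names are invented placeholders, not real tenant configuration keys:

```python
# Illustrative sketch: evidence as a scheduled, verifiable snapshot rather
# than a screenshot chase. Settings shown are invented placeholders.
import json
import hashlib
from datetime import datetime, timezone

def snapshot(settings: dict) -> dict:
    """Wrap a settings export with a timestamp and a content hash so the
    record can be verified later without trusting anyone's memory."""
    body = json.dumps(settings, sort_keys=True)
    return {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(body.encode()).hexdigest(),
        "settings": settings,
    }

record = snapshot({"external_sharing": "existing_guests", "mfa_required": True})
print(record["captured_at"], record["sha256"][:12])
```

Run on a schedule, records like this are what turn a six-week reconstruction into a lookup.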

189
00:10:08,480 --> 00:10:10,240
Now map that to daily work.

190
00:10:10,240 --> 00:10:14,160
Exceptions happen manually, reviews depend on calendar reminders and spreadsheets become

191
00:10:14,160 --> 00:10:16,000
shadow-governance systems.

192
00:10:16,000 --> 00:10:20,040
Because the formal model is weak, local teams start inventing their own ways to manage

193
00:10:20,040 --> 00:10:24,080
risk so one department does it carefully while another ignores it entirely.

194
00:10:24,080 --> 00:10:28,280
The tenant becomes uneven; it isn't just unsecured, it is inconsistent, and uneven environments

195
00:10:28,280 --> 00:10:29,600
are hard to trust.

196
00:10:29,600 --> 00:10:31,920
Which brings me to Copilot and AI at this level.

197
00:10:31,920 --> 00:10:35,720
Most leaders think AI risk starts with the AI tool but usually it doesn't.

198
00:10:35,720 --> 00:10:39,560
At level 100, Copilot simply reveals the disorder that already exists.

199
00:10:39,560 --> 00:10:40,760
Permissions are broad.

200
00:10:40,760 --> 00:10:44,000
Sensitive content is poorly identified and ownership is unclear.

201
00:10:44,000 --> 00:10:46,880
So security gets nervous and legal slows things down.

202
00:10:46,880 --> 00:10:50,400
Directors don't trust the outputs because the data estate itself isn't trustworthy.

203
00:10:50,400 --> 00:10:55,040
That's why low Copilot trust is often just low governance maturity becoming visible.

204
00:10:55,040 --> 00:10:58,000
The AI layer didn't create the problem; it exposed it.

205
00:10:58,000 --> 00:11:00,000
And the business reality here is expensive.

206
00:11:00,000 --> 00:11:02,600
Decisions slow down because confidence is low.

207
00:11:02,600 --> 00:11:06,440
Incidents cost more because context is scattered and every governance question turns into

208
00:11:06,440 --> 00:11:07,440
a mini-project.

209
00:11:07,440 --> 00:11:10,880
If you're operating at level 100, the issue is not that you need more meetings or more

210
00:11:10,880 --> 00:11:11,960
policy language.

211
00:11:11,960 --> 00:11:13,360
The issue is simpler.

212
00:11:13,360 --> 00:11:18,120
Your environment is producing reactive behavior because that is exactly what it was set up

213
00:11:18,120 --> 00:11:19,120
to do.

214
00:11:19,120 --> 00:11:24,360
And once teams feel that pain, they usually move into something that looks better.

215
00:11:24,360 --> 00:11:26,720
Level 200: managed but fragile.

216
00:11:26,720 --> 00:11:31,200
Level 200 is the stage where organizations finally start to feel a sense of relief because

217
00:11:31,200 --> 00:11:33,400
they've moved past the initial chaos.

218
00:11:33,400 --> 00:11:37,640
To be fair things actually have improved significantly since the early days of improvisation.

219
00:11:37,640 --> 00:11:41,800
You'll see formal policies appearing and specific roles being defined while someone

220
00:11:41,800 --> 00:11:46,280
finally starts the work of mapping out risks and approvals across Teams, SharePoint and

221
00:11:46,280 --> 00:11:47,280
OneDrive.

222
00:11:47,280 --> 00:11:50,800
The environment no longer feels like a collection of random accidents and that shift matters

223
00:11:50,800 --> 00:11:52,120
deeply for the business.

224
00:11:52,120 --> 00:11:55,600
This is the first point where governance stops being an emergency response and starts

225
00:11:55,600 --> 00:11:57,880
looking like an actual professional discipline.

226
00:11:57,880 --> 00:12:01,080
But from a system perspective, level 200 is far from stable.

227
00:12:01,080 --> 00:12:02,920
It is managed certainly.

228
00:12:02,920 --> 00:12:06,880
But it remains incredibly fragile because the controls aren't yet consistent enough to

229
00:12:06,880 --> 00:12:09,440
survive pressure without constant human intervention.

230
00:12:09,440 --> 00:12:13,040
You can see this fragility everywhere you look in the daily operations.

231
00:12:13,040 --> 00:12:17,060
A review cycle might technically exist but it only actually happens if a specific person

232
00:12:17,060 --> 00:12:19,360
remembers to run the manual process.

233
00:12:19,360 --> 00:12:23,280
An ownership model is written down but the details are usually too fuzzy to make anyone

234
00:12:23,280 --> 00:12:25,320
truly accountable when things go wrong.

235
00:12:25,320 --> 00:12:29,440
What you're looking at is a mixed control model that relies on a messy combination of technology

236
00:12:29,440 --> 00:12:30,880
and tribal memory.

237
00:12:30,880 --> 00:12:34,480
Part of the system lives in a policy document, part of it is a setting in Purview and a huge

238
00:12:34,480 --> 00:12:37,880
chunk of it is just a spreadsheet or a recurring meeting.

239
00:12:37,880 --> 00:12:41,840
This specific combination is extremely common in the corporate world and it's exactly why

240
00:12:41,840 --> 00:12:46,080
level 200 creates such a dangerous executive illusion.

241
00:12:46,080 --> 00:12:49,480
Leadership looks at the new charts and says they finally have governance and while they

242
00:12:49,480 --> 00:12:53,000
aren't totally wrong, they don't yet have governance that scales.

243
00:12:53,000 --> 00:12:56,480
That distinction matters more than most people realize because level 200 looks fantastic

244
00:12:56,480 --> 00:12:58,040
during a slide presentation.

245
00:12:58,040 --> 00:13:02,480
You have names assigned to roles, process diagrams drawn out and a governance committee

246
00:13:02,480 --> 00:13:05,040
that meets once a month to discuss the backlog.

247
00:13:05,040 --> 00:13:08,720
However, the moment you test this operating model under real world pressure, the structural

248
00:13:08,720 --> 00:13:10,960
weaknesses become impossible to ignore.

249
00:13:10,960 --> 00:13:14,720
When a key person leaves the company or an urgent audit request lands on the desk, the

250
00:13:14,720 --> 00:13:16,440
hidden dependencies start to break.

251
00:13:16,440 --> 00:13:20,960
The spreadsheet needs a manual update that nobody has time for and the logic for handling

252
00:13:20,960 --> 00:13:23,320
exceptions turns out to be poorly documented.

253
00:13:23,320 --> 00:13:28,280
Suddenly, the business owner thinks IT is responsible for the data, while IT assumes compliance

254
00:13:28,280 --> 00:13:32,080
has it handled and security just figured the control was already being measured.

255
00:13:32,080 --> 00:13:35,960
Once that confusion sets in, the system starts compensating by throwing more people at the

256
00:13:35,960 --> 00:13:36,960
problem.

257
00:13:36,960 --> 00:13:41,160
This is the core weakness of level 200 because it depends entirely on coordination rather

258
00:13:41,160 --> 00:13:42,160
than architecture.

259
00:13:42,160 --> 00:13:45,720
Coordination can work for a while in smaller environments or within teams that have a few

260
00:13:45,720 --> 00:13:47,160
highly committed individuals.

261
00:13:47,160 --> 00:13:51,720
But the reality is that coordination is expensive because it requires constant reminders,

262
00:13:51,720 --> 00:13:55,280
follow ups and escalations just to keep the lights on.

263
00:13:55,280 --> 00:13:59,120
Because this level depends so heavily on human energy and discipline, the whole structure

264
00:13:59,120 --> 00:14:01,760
starts to degrade the moment the organization gets busy.

265
00:14:01,760 --> 00:14:04,280
That's why I call this stage managed but fragile.

266
00:14:04,280 --> 00:14:06,680
The structure exists, but the actual resilience does not.

267
00:14:06,680 --> 00:14:10,880
At this point you'll start to see small pockets of maturity appearing in specific departments

268
00:14:10,880 --> 00:14:12,280
like legal or HR.

269
00:14:12,280 --> 00:14:17,360
Maybe the legal team runs their SharePoint sites tightly or HR has developed a strong discipline

270
00:14:17,360 --> 00:14:19,080
around sensitivity labeling.

271
00:14:19,080 --> 00:14:22,560
These pockets are important because they prove the organization is capable of doing better

272
00:14:22,560 --> 00:14:24,840
but they also create a false sense of security.

273
00:14:24,840 --> 00:14:28,760
Leaders just see these few successful areas and assume the entire tenant is maturing when

274
00:14:28,760 --> 00:14:32,400
they're actually just looking at local excellence inside an uneven estate.

275
00:14:32,400 --> 00:14:37,120
Uneven governance is a massive risk because it makes the environment feel much safer than

276
00:14:37,120 --> 00:14:38,120
it actually is.

277
00:14:38,120 --> 00:14:42,400
In a typical level 200 Microsoft 365 operation you'll find policies that are defined but

278
00:14:42,400 --> 00:14:47,360
not enforced and access reviews that only run in certain corners of the network.

279
00:14:47,360 --> 00:14:51,480
Evidence trails are often half available and half reconstructed from memory while critical

280
00:14:51,480 --> 00:14:55,400
controls still rely on email threads and professional goodwill.

281
00:14:55,400 --> 00:15:00,360
This is a classic case of structural compensation where the organization adds layers of coordination

282
00:15:00,360 --> 00:15:01,360
to hide the gaps.

283
00:15:01,360 --> 00:15:05,360
This setup works reasonably well until the complexity rises and then the old patterns of

284
00:15:05,360 --> 00:15:07,280
confusion and exposure return.

285
00:15:07,280 --> 00:15:11,040
Copilot makes this level especially interesting because organizations at this stage usually

286
00:15:11,040 --> 00:15:14,480
want AI benefits before they have AI ready controls.

287
00:15:14,480 --> 00:15:18,120
As soon as the rollout starts difficult questions begin to surface about permission trust

288
00:15:18,120 --> 00:15:19,520
and content labeling.

289
00:15:19,520 --> 00:15:24,120
At that point the AI conversation turns into a brutal stress test for your governance

290
00:15:24,120 --> 00:15:26,320
and the results are usually a mixed bag.

291
00:15:26,320 --> 00:15:30,600
That signature of being managed but fragile is exactly what separates level 200 from true

292
00:15:30,600 --> 00:15:32,080
maturity.

293
00:15:32,080 --> 00:15:34,280
Level 300: defined but uneven.

294
00:15:34,280 --> 00:15:38,800
Level 300 is the stage where governance finally starts to look like a real functioning system

295
00:15:38,800 --> 00:15:40,840
rather than just a series of manual efforts.

296
00:15:40,840 --> 00:15:45,080
This is the point where an organization usually adopts a recognizable framework with clearer

297
00:15:45,080 --> 00:15:47,080
terms and documented processes.

298
00:15:47,080 --> 00:15:51,720
If you were to ask five different people how a sensitivity label works or who approves

299
00:15:51,720 --> 00:15:54,840
an exception you'd finally start hearing the same answer.

300
00:15:54,840 --> 00:15:58,520
That is massive progress because it means the organization can finally repeat good outcomes

301
00:15:58,520 --> 00:16:00,040
on purpose rather than by accident.

302
00:16:00,040 --> 00:16:03,200
I've seen this click for a lot of different companies over the years.

303
00:16:03,200 --> 00:16:07,960
At level 200 you can still feel the environment wobble under its own weight but at level 300

304
00:16:07,960 --> 00:16:09,600
the structure starts to hold its shape.

305
00:16:09,600 --> 00:16:13,080
There is usually a much stronger language model across the business now and I'm not talking

306
00:16:13,080 --> 00:16:14,080
about an AI model.

307
00:16:14,080 --> 00:16:17,880
I'm talking about a governance language where people mean the same thing when they discuss

308
00:16:17,880 --> 00:16:20,720
ownership, classification and life cycle.

309
00:16:20,720 --> 00:16:25,760
This shared vocabulary reduces friction because you no longer have to argue about basic definitions.

310
00:16:25,760 --> 00:16:29,600
The process has also become much more visible to everyone involved in the system.

311
00:16:29,600 --> 00:16:33,320
You can actually trace how a new team gets created, how a SharePoint site is reviewed

312
00:16:33,320 --> 00:16:36,000
and how audit evidence is supposed to be stored.

313
00:16:36,000 --> 00:16:41,040
The system becomes legible and that legibility is a genuine maturity gain for any business.

314
00:16:41,040 --> 00:16:45,040
Once the control logic is visible to the people using it you can finally stop guessing

315
00:16:45,040 --> 00:16:47,520
and start making real improvements to the workflow.

316
00:16:47,520 --> 00:16:51,800
However, this is also where most people get the maturity model wrong because defined does

317
00:16:51,800 --> 00:16:53,880
not mean uniform.

318
00:16:53,880 --> 00:16:58,040
Level 300 is still incredibly uneven and the tenant doesn't yet behave consistently

319
00:16:58,040 --> 00:17:00,960
enough across different workloads to be considered fully mature.

320
00:17:00,960 --> 00:17:05,360
This unevenness usually shows up in three specific ways that hold the organization back.

321
00:17:05,360 --> 00:17:09,240
First, success remains local rather than systemic, meaning Teams governance might be great

322
00:17:09,240 --> 00:17:12,000
while OneDrive remains a complete mystery.

323
00:17:12,000 --> 00:17:16,360
Second, metrics finally start to appear but the business doesn't always trust the data

324
00:17:16,360 --> 00:17:17,360
they're seeing.

325
00:17:17,360 --> 00:17:21,080
The organizations at this level start measuring label coverage and remediation timelines

326
00:17:21,080 --> 00:17:24,600
but people still argue about the scope and accuracy of those numbers.

327
00:17:24,600 --> 00:17:28,360
Because the reporting isn't yet embedded in the daily decision making process, confidence

328
00:17:28,360 --> 00:17:29,960
in the data is still forming.

329
00:17:29,960 --> 00:17:34,320
Third, governance tends to stay trapped in silos where security, compliance and collaboration

330
00:17:34,320 --> 00:17:37,560
teams all have different views of the same reality.

331
00:17:37,560 --> 00:17:41,360
This is why level 300 often becomes a plateau for many companies.

332
00:17:41,360 --> 00:17:45,080
From the outside it looks mature enough because there are documents, metrics and a common

333
00:17:45,080 --> 00:17:47,040
process language in place.

334
00:17:47,040 --> 00:17:50,920
Since things are no longer obviously reactive or chaotic, leadership often assumes the

335
00:17:50,920 --> 00:17:52,360
hard work is finished.

336
00:17:52,360 --> 00:17:57,520
But structurally, the system still depends on inconsistent automation and uneven adoption

337
00:17:57,520 --> 00:17:59,280
across the different business units.

338
00:17:59,280 --> 00:18:03,640
That lack of consistency matters most when the pressure of a major audit or a rapid AI

339
00:18:03,640 --> 00:18:05,000
rollout hits the system.

340
00:18:05,000 --> 00:18:09,360
Under normal conditions, level 300 functions reasonably well but the weak seams become visible

341
00:18:09,360 --> 00:18:10,880
the moment you try to scale.

342
00:18:10,880 --> 00:18:15,360
A control might work perfectly in one department but fail completely in another and life cycle

343
00:18:15,360 --> 00:18:18,200
decisions still stall even though an owner has been named.

344
00:18:18,200 --> 00:18:22,880
The environment is defined but it isn't yet reliably executable across the entire enterprise.

345
00:18:22,880 --> 00:18:26,440
When you map this to Copilot readiness you see that level 300 organizations improve

346
00:18:26,440 --> 00:18:28,520
much faster than those at lower levels.

347
00:18:28,520 --> 00:18:33,040
There is more structure in the data estate and better consistency in permissions so the

348
00:18:33,040 --> 00:18:35,240
level of trust in the system starts to rise.

349
00:18:35,240 --> 00:18:39,840
But because the underlying estate is still uneven, the quality of the AI output remains hit

350
00:18:39,840 --> 00:18:40,840
or miss.

351
00:18:40,840 --> 00:18:43,800
Copilot can only ground itself in the environment you provide.

352
00:18:43,800 --> 00:18:48,200
So if one part of the tenant is messy or poorly owned, the user experience will suffer.

353
00:18:48,200 --> 00:18:52,400
This creates a strange reality where leaders see enough success to believe in AI but users

354
00:18:52,400 --> 00:18:54,840
see enough failure to remain cautious.

355
00:18:54,840 --> 00:18:58,480
Adoption starts to move forward but the confidence in the system doesn't fully compound

356
00:18:58,480 --> 00:19:01,120
because of those lingering inconsistencies.

357
00:19:01,120 --> 00:19:05,240
Level 300 is a vital step in the journey but it remains an incomplete solution for a

358
00:19:05,240 --> 00:19:06,240
modern business.

359
00:19:06,240 --> 00:19:09,640
It gives you the framework and the shared model you need to move forward but it doesn't

360
00:19:09,640 --> 00:19:11,680
yet make governance behave predictably.

361
00:19:11,680 --> 00:19:15,360
The real shift only happens when governance becomes a measurable reality that the entire

362
00:19:15,360 --> 00:19:17,000
business can actually trust.

363
00:19:17,000 --> 00:19:19,360
Level 400, predictable governance.

364
00:19:19,360 --> 00:19:23,320
Level 400 is the point where your governance model stops being a descriptive map and starts

365
00:19:23,320 --> 00:19:24,920
becoming an operational engine.

366
00:19:24,920 --> 00:19:29,000
This marks the critical shift from defined governance to predictable governance.

367
00:19:29,000 --> 00:19:30,920
And that distinction is everything.

368
00:19:30,920 --> 00:19:34,640
Predictability means your environment behaves in ways you can actually trust before the

369
00:19:34,640 --> 00:19:37,080
pressure of an audit or breach arrives.

370
00:19:37,080 --> 00:19:41,080
At this stage, the tenant no longer relies on a few heroic individuals holding the system

371
00:19:41,080 --> 00:19:45,080
together through sheer willpower because the control model is now woven into the fabric

372
00:19:45,080 --> 00:19:46,560
of how work gets done.

373
00:19:46,560 --> 00:19:49,480
This is the real threshold for a mature organization.

374
00:19:49,480 --> 00:19:52,800
Once your governance becomes predictable, it stops feeling like annoying overhead and

375
00:19:52,800 --> 00:19:55,400
starts functioning like essential infrastructure.

376
00:19:55,400 --> 00:19:59,800
Ownership at this level is finally executable rather than just being documented in a forgotten

377
00:19:59,800 --> 00:20:00,920
policy folder.

378
00:20:00,920 --> 00:20:05,200
Every workspace, data set, exception and review has a designated owner but the name on the

379
00:20:05,200 --> 00:20:07,040
spreadsheet isn't the important part.

380
00:20:07,040 --> 00:20:10,520
What matters is that your operating model knows exactly what to do with that ownership

381
00:20:10,520 --> 00:20:13,240
so reviews trigger automatically.

382
00:20:13,240 --> 00:20:18,120
Escalations happen without manual intervention and every decision leaves a clear trail of evidence.

383
00:20:18,120 --> 00:20:21,600
Accountability is no longer just a social expectation between colleagues.

384
00:20:21,600 --> 00:20:24,040
It becomes a structural reality of the system itself.

385
00:20:24,040 --> 00:20:28,400
This is a massive change because most governance failures happen in the dark gap between naming

386
00:20:28,400 --> 00:20:31,600
a responsible person and actually giving them the tools to be responsible.

387
00:20:31,600 --> 00:20:34,240
Level 400 closes that gap permanently.

388
00:20:34,240 --> 00:20:38,240
You also start to see automation take over the repetitive tasks that used to depend on

389
00:20:38,240 --> 00:20:42,680
human memory or professional goodwill, which is where most leadership teams finally feel

390
00:20:42,680 --> 00:20:44,320
a sense of relief.

391
00:20:44,320 --> 00:20:48,720
The environment now supports the people inside it instead of constantly forcing them to compensate

392
00:20:48,720 --> 00:20:50,480
for weak system design.

393
00:20:50,480 --> 00:20:52,640
Labeling is no longer just an available feature.

394
00:20:52,640 --> 00:20:55,480
It is a mandatory step built into the workflow.

395
00:20:55,480 --> 00:20:59,840
Access reviews aren't just things you hope get scheduled; they are now a hard-coded part

396
00:20:59,840 --> 00:21:01,040
of the data lifecycle.

397
00:21:01,040 --> 00:21:05,400
Your DLP isn't just a background setting you set and forget, but a monitored control

398
00:21:05,400 --> 00:21:08,160
aligned strictly to your classification levels.

399
00:21:08,160 --> 00:21:12,200
Evidence is no longer assembled in a state of panic before a deadline because it is produced

400
00:21:12,200 --> 00:21:15,440
as a natural byproduct of daily operating discipline.

401
00:21:15,440 --> 00:21:18,600
That difference is huge for the long term health of the business.

402
00:21:18,600 --> 00:21:22,960
Once automation handles the repeatable governance actions, the human role in the system changes

403
00:21:22,960 --> 00:21:23,960
for the better.

404
00:21:23,960 --> 00:21:27,720
People stop burning their limited energy on chasing down approvals, reminding owners

405
00:21:27,720 --> 00:21:31,440
of their duties or reconstructing history from old emails.

406
00:21:31,440 --> 00:21:35,140
They can finally focus on the harder strategic questions like whether a specific control

407
00:21:35,140 --> 00:21:39,100
still fits the current risk or if a particular exception is truly justified.

408
00:21:39,100 --> 00:21:42,900
This is what mature governance work is supposed to look like and it definitely doesn't involve

409
00:21:42,900 --> 00:21:44,260
spreadsheet archaeology.

410
00:21:44,260 --> 00:21:50,100
The metric layer also undergoes a transformation at level 400. Back at level 300, metrics existed,

411
00:21:50,100 --> 00:21:54,060
but they often had to compete with loud opinions or gut feelings during meetings.

412
00:21:54,060 --> 00:21:57,820
At level 400 those metrics are trusted enough to drive real business decisions.

413
00:21:57,820 --> 00:22:01,260
This doesn't mean every single data point is perfect, but it means the organization has

414
00:22:01,260 --> 00:22:05,020
enough confidence in the measurement model to act on what the dashboard says.

415
00:22:05,020 --> 00:22:09,420
Leaders can look at audit readiness, label coverage and remediation speed to steer the environment

416
00:22:09,420 --> 00:22:10,740
with precision.

417
00:22:10,740 --> 00:22:14,780
Governance finally becomes measurable under pressure, which is the real shift in perspective.

418
00:22:14,780 --> 00:22:18,740
You are no longer asking your team if they think things are working correctly.

419
00:22:18,740 --> 00:22:23,140
Instead you are looking at the data to see exactly where the system is drifting and deciding

420
00:22:23,140 --> 00:22:25,100
what needs to be changed next.

421
00:22:25,100 --> 00:22:29,780
Audit behavior is one of the clearest signs that a tenant has reached this level of maturity.

422
00:22:29,780 --> 00:22:33,380
Preparation time drops significantly not because the auditors got any easier on you, but

423
00:22:33,380 --> 00:22:36,260
because the evidence is already baked into the operating model.

424
00:22:36,260 --> 00:22:40,700
The organization knows exactly where its records live and who owns every piece of the puzzle.

425
00:22:40,700 --> 00:22:45,180
It can demonstrate control execution without having to rebuild a history of events from

426
00:22:45,180 --> 00:22:47,220
old inboxes and file exports.

427
00:22:47,220 --> 00:22:50,220
Audit readiness moves from a seasonal scramble to a boring routine.

428
00:22:50,220 --> 00:22:54,260
In practical terms this is where organizations move from six weeks of preparation down to

429
00:22:54,260 --> 00:22:57,500
a single week and some even become close to continuously ready.

430
00:22:57,500 --> 00:23:01,380
This isn't just a compliance win, it is a massive amount of business capacity being

431
00:23:01,380 --> 00:23:02,660
returned to the system.

432
00:23:02,660 --> 00:23:04,820
Now map that same logic to your data layer.

433
00:23:04,820 --> 00:23:10,860
At level 400, your labeling, DLP, access boundaries and life cycle management start behaving

434
00:23:10,860 --> 00:23:15,300
like a single operating model rather than four separate IT initiatives.

435
00:23:15,300 --> 00:23:19,620
This alignment is vital because unintended data exposure rarely comes from one dramatic

436
00:23:19,620 --> 00:23:20,620
system failure.

437
00:23:20,620 --> 00:23:25,180
It usually comes from small quiet disconnects between different controls, like broad permissions

438
00:23:25,180 --> 00:23:28,220
in one folder combined with weak labeling in another.

439
00:23:28,220 --> 00:23:32,580
Level 400 reduces that fragmentation so data exposure drops significantly.

440
00:23:32,580 --> 00:23:36,980
Risky behavior becomes harder to sustain because the architecture itself makes it difficult,

441
00:23:36,980 --> 00:23:39,260
not because your users suddenly became perfect.

442
00:23:39,260 --> 00:23:42,300
That same pattern shows up when you look at Copilot readiness.

443
00:23:42,300 --> 00:23:47,180
At this level AI becomes much easier to trust because the underlying data estate is finally

444
00:23:47,180 --> 00:23:48,740
organized and visible.

445
00:23:48,740 --> 00:23:53,300
Your AI outputs get better because the grounding data is higher quality and your security

446
00:23:53,300 --> 00:23:56,820
teams feel more confident because permissions are actually explainable.

447
00:23:56,820 --> 00:24:00,940
Business leaders can move faster because they aren't guessing whether AI is surfacing

448
00:24:00,940 --> 00:24:02,420
unmanaged risks.

449
00:24:02,420 --> 00:24:05,300
This is the point where governance starts creating business velocity.

450
00:24:05,300 --> 00:24:09,060
It doesn't do this by removing controls but by making those controls consistent enough

451
00:24:09,060 --> 00:24:12,900
that the business can move without second guessing the environment every day.

452
00:24:12,900 --> 00:24:17,780
From a systems perspective that is the true meaning of level 400, structural resilience.

453
00:24:17,780 --> 00:24:21,860
The tenant can absorb pressure, staff changes and rapid growth without reverting to manual

454
00:24:21,860 --> 00:24:22,860
heroics.

455
00:24:22,860 --> 00:24:25,100
Governance is no longer interrupting the work.

456
00:24:25,100 --> 00:24:29,340
It is enabling stable work to happen at a massive scale.

457
00:24:29,340 --> 00:24:31,540
Level 500, optimised governance.

458
00:24:31,540 --> 00:24:33,740
Level 500 is a different beast entirely.

459
00:24:33,740 --> 00:24:37,500
It isn't about reaching a state of perfection where risk disappears because that doesn't

460
00:24:37,500 --> 00:24:38,820
exist in the real world.

461
00:24:38,820 --> 00:24:42,860
It is different because governance becomes a living, breathing, operating discipline instead

462
00:24:42,860 --> 00:24:44,420
of just a static set of controls

463
00:24:44,420 --> 00:24:46,700
the organisation is struggling to maintain.

464
00:24:46,700 --> 00:24:51,140
While level 400 makes the environment behave predictably, level 500 is where the organisation

465
00:24:51,140 --> 00:24:53,940
gets good at improving that behaviour on purpose.

466
00:24:53,940 --> 00:24:57,500
Continuous improvement stops being a corporate slogan and becomes the actual way the tenant

467
00:24:57,500 --> 00:24:58,500
is managed.

468
00:24:58,500 --> 00:25:02,940
Controls are reviewed regularly, metrics are tuned for better accuracy and exceptions are

469
00:25:02,940 --> 00:25:06,940
analysed for broader patterns rather than being resolved one by one.

470
00:25:06,940 --> 00:25:11,300
Drift is treated as a permanent operating concern that requires constant attention rather

471
00:25:11,300 --> 00:25:12,820
than a surprising failure.

472
00:25:12,820 --> 00:25:17,620
This matters because Microsoft 365 is a moving target that changes every single week.

473
00:25:17,620 --> 00:25:21,820
The platform evolves, the business pivots and your data patterns shift as people adopt

474
00:25:21,820 --> 00:25:22,820
new AI tools.

475
00:25:22,820 --> 00:25:27,100
If your governance only works in a frozen environment, it doesn't actually work at all.

476
00:25:27,100 --> 00:25:30,940
Level 500 accepts this reality and builds the entire system around the idea of constant

477
00:25:30,940 --> 00:25:31,940
change.

478
00:25:31,940 --> 00:25:36,220
This is also where governance aligns much more clearly with your actual business strategy

479
00:25:36,220 --> 00:25:37,340
and risk appetite.

480
00:25:37,340 --> 00:25:41,740
At lower levels of maturity, organisations usually ask what controls they are allowed to

481
00:25:41,740 --> 00:25:42,820
apply.

482
00:25:42,820 --> 00:25:47,420
At level 500 they ask what business risk they are trying to manage and what control design

483
00:25:47,420 --> 00:25:49,860
will support the speed they need to stay competitive.

484
00:25:49,860 --> 00:25:53,580
That is a much more sophisticated question for a leadership team to tackle.

485
00:25:53,580 --> 00:25:57,180
This is no longer just reacting to the latest software updates.

486
00:25:57,180 --> 00:25:59,700
It is intentionally shaping the business reality.

487
00:25:59,700 --> 00:26:03,260
You see this most clearly in how decisions are made at the executive level.

488
00:26:03,260 --> 00:26:06,780
Leaders can decide exactly where they need heavy restrictions and where they can afford

489
00:26:06,780 --> 00:26:10,220
more flexibility because they trust the data behind those choices.

490
00:26:10,220 --> 00:26:14,820
The environment is no longer governed by a blanket sense of generic caution but by informed

491
00:26:14,820 --> 00:26:15,820
trade-offs.

492
00:26:15,820 --> 00:26:19,420
Cross-process coordination also reaches a new peak at this stage of the journey.

493
00:26:19,420 --> 00:26:24,860
At lower levels your lifecycle, protection and AI controls often mature at completely different

494
00:26:24,860 --> 00:26:26,820
speeds leading to gaps in your defence.

495
00:26:26,820 --> 00:26:29,860
At level 500 these dependencies are managed with deliberate care.

496
00:26:29,860 --> 00:26:34,940
A change in how you classify data is immediately reflected in how your DLP is tuned and an

497
00:26:34,940 --> 00:26:38,980
ownership gap in a department triggers an automatic life cycle review.

498
00:26:38,980 --> 00:26:43,980
Every Copilot roll-out decision is connected back to data quality and permission boundaries.

499
00:26:43,980 --> 00:26:47,460
The operating model becomes truly integrated rather than just being a collection of separate

500
00:26:47,460 --> 00:26:48,980
security features.

501
00:26:48,980 --> 00:26:52,460
And validation also becomes a normal part of the rhythm at this level.

502
00:26:52,460 --> 00:26:57,620
This might mean benchmarking your setup against international standards like ISO 27001

503
00:26:57,620 --> 00:26:59,780
or bringing in external experts for a fresh perspective.

504
00:26:59,780 --> 00:27:04,300
It could also mean having internal teams validate whether your controls are actually effective

505
00:27:04,300 --> 00:27:07,700
instead of just assuming they work because the on switch is flipped.

506
00:27:07,700 --> 00:27:11,900
The organization stops trusting itself blindly and starts creating feedback loops that are

507
00:27:11,900 --> 00:27:15,100
strong enough to challenge its own internal assumptions.

508
00:27:15,100 --> 00:27:19,060
Defined systems spend all their time defending their original design but optimized systems

509
00:27:19,060 --> 00:27:21,980
are constantly testing that design to find the breaking points.

510
00:27:21,980 --> 00:27:26,020
This is also where AI and automation start improving the governance model itself rather

511
00:27:26,020 --> 00:27:28,420
than just the business processes around it.

512
00:27:28,420 --> 00:27:31,820
Power Automate is no longer just sending out simple approval reminders.

513
00:27:31,820 --> 00:27:35,540
It becomes the plumbing for how evidence is gathered and how control maintenance is refined

514
00:27:35,540 --> 00:27:36,540
over time.

515
00:27:36,540 --> 00:27:40,180
Your tools start acting less like a list of features and more like a sophisticated feedback

516
00:27:40,180 --> 00:27:41,180
engine.

517
00:27:41,180 --> 00:27:45,500
Purview isn't just there to enforce a policy, it is feeding you visibility into oversharing

518
00:27:45,500 --> 00:27:49,020
and AI risk patterns that help your team tune the environment.

519
00:27:49,020 --> 00:27:53,220
This is why very few organizations truly live at level 500 across their entire digital

520
00:27:53,220 --> 00:27:54,220
estate.

521
00:27:54,220 --> 00:27:57,820
They might reach this peak in specific high stakes domains like a legal environment or a

522
00:27:57,820 --> 00:27:59,460
finance department.

523
00:27:59,460 --> 00:28:03,420
Achieving this level of optimization across an entire enterprise is difficult because

524
00:28:03,420 --> 00:28:07,460
every messy business process eventually shows up in the control layer.

525
00:28:07,460 --> 00:28:10,500
Level 500 has a way of exposing those hidden inefficiencies.

526
00:28:10,500 --> 00:28:15,180
It rewards organizations that have the discipline to think architecturally across ownership,

527
00:28:15,180 --> 00:28:17,100
measurement and life cycle management.

528
00:28:17,100 --> 00:28:21,260
These shouldn't be viewed as separate work streams but as parts of one cohesive operating

529
00:28:21,260 --> 00:28:22,260
system.

530
00:28:22,260 --> 00:28:25,380
The useful question to ask yourself here isn't whether your system is perfect because that

531
00:28:25,380 --> 00:28:26,820
is an unrealistic goal.

532
00:28:26,820 --> 00:28:31,180
The real question is where you actually stand today and what kind of environment your current

533
00:28:31,180 --> 00:28:32,500
system is producing.

534
00:28:32,500 --> 00:28:36,540
If you audited your structural resilience today, would you find a system designed to sustain

535
00:28:36,540 --> 00:28:39,660
your growth or one that is slowly draining your resources?

536
00:28:39,660 --> 00:28:41,260
The 5 question maturity check.

537
00:28:41,260 --> 00:28:44,300
At this point you might be thinking this all sounds useful but you probably don't want

538
00:28:44,300 --> 00:28:48,180
to run a 6 week assessment project just to find your place on the map.

539
00:28:48,180 --> 00:28:49,580
Here is the shortcut nobody teaches.

540
00:28:49,580 --> 00:28:53,820
You do not need a massive maturity exercise to understand where you stand because you really

541
00:28:53,820 --> 00:28:55,540
only need 5 honest questions.

542
00:28:55,540 --> 00:28:59,580
I am not talking about aspirational policy answers that look good on paper but the actual

543
00:28:59,580 --> 00:29:02,340
operating answers that define your daily reality.

544
00:29:02,340 --> 00:29:05,740
If you can answer these 5 clearly, your true maturity level usually becomes visible

545
00:29:05,740 --> 00:29:06,740
very fast.

546
00:29:06,740 --> 00:29:08,260
Question 1 is ownership.

547
00:29:08,260 --> 00:29:12,100
Does every critical workspace and dataset have a clearly assigned owner right now? I

548
00:29:12,100 --> 00:29:16,860
don't mean in theory or buried somewhere in an old register that nobody looks at anymore.

549
00:29:16,860 --> 00:29:21,340
If a high-risk team, SharePoint site or dataset creates a massive problem tomorrow morning,

550
00:29:21,340 --> 00:29:23,660
can you point to one accountable owner without a debate?

551
00:29:23,660 --> 00:29:27,380
When the answer is a firm, yes, that is a strong sign of structural maturity.

552
00:29:27,380 --> 00:29:33,140
If the answer is only partially, meaning some areas are clear while others stay fuzzy,

553
00:29:33,140 --> 00:29:37,220
you are likely sitting in that middle territory between level 200 and 300.

554
00:29:37,220 --> 00:29:41,020
But if the answer is no, you are dealing with reactive governance and it doesn't matter

555
00:29:41,020 --> 00:29:43,140
what you call it on your slide decks.

556
00:29:43,140 --> 00:29:47,380
Unclear ownership is never just a small administrative issue because it means your accountability

557
00:29:47,380 --> 00:29:49,380
is social rather than structural.

558
00:29:49,380 --> 00:29:50,900
Question 2 is data visibility.

559
00:29:50,900 --> 00:29:54,740
Do you know exactly what percentage of your sensitive data is labelled and protected?

560
00:29:54,740 --> 00:29:56,660
This needs to be measured not guessed.

561
00:29:56,660 --> 00:30:01,020
This is where a lot of leaders get uncomfortable because they know a labeling policy exists.

562
00:30:01,020 --> 00:30:04,300
But they have no idea if the estate is actually behaving accordingly.

563
00:30:04,300 --> 00:30:08,380
They have the rules, but they lack the telemetry to see if those rules are being followed

564
00:30:08,380 --> 00:30:09,380
across the board.

565
00:30:09,380 --> 00:30:12,940
If you can answer this with confidence and hard evidence, it points toward a much higher

566
00:30:12,940 --> 00:30:13,940
level of maturity.

567
00:30:13,940 --> 00:30:17,340
When you only have a rough estimate, you are still in the developing stages.

568
00:30:17,340 --> 00:30:22,020
If the number is unknown, then the organization is managing its exposure through pure assumption,

569
00:30:22,020 --> 00:30:25,460
and we all know that assumption is not a functional control model.

570
00:30:25,460 --> 00:30:26,980
Question 3 is governance automation.

571
00:30:26,980 --> 00:30:31,020
Are your key controls like labeling, access reviews and lifecycle handling automated?

572
00:30:31,020 --> 00:30:32,780
Or are they still mostly manual?

573
00:30:32,780 --> 00:30:36,620
This question matters because it reveals exactly where your security behavior still depends

574
00:30:36,620 --> 00:30:37,620
on human memory.

575
00:30:37,620 --> 00:30:41,580
If your core controls are automated and consistently triggered through the operating model, you are

576
00:30:41,580 --> 00:30:43,580
moving into level 400 behavior.

577
00:30:43,580 --> 00:30:47,660
When it's a mix of some automation and a lot of spreadsheet coordination, you are in

578
00:30:47,660 --> 00:30:49,780
that classic, messy, middle state.

579
00:30:49,780 --> 00:30:53,780
If it is mostly manual, then your governance depends on individual effort more than system

580
00:30:53,780 --> 00:30:58,100
design, which means fragility is built into the very foundation of how you work.

581
00:30:58,100 --> 00:30:59,580
Question 4 is audit readiness.

582
00:30:59,580 --> 00:31:03,420
Could you produce meaningful audit evidence within a few days instead of a few weeks?

583
00:31:03,420 --> 00:31:06,860
This is one of my favorite questions because it cuts through maturity theatre faster than

584
00:31:06,860 --> 00:31:07,940
almost anything else.

585
00:31:07,940 --> 00:31:12,220
If the evidence already exists in a usable form, your operating model is probably doing

586
00:31:12,220 --> 00:31:13,420
the real work for you.

587
00:31:13,420 --> 00:31:18,060
If your answer is probably, or we could if we tried hard enough, then what you actually

588
00:31:18,060 --> 00:31:19,740
have is conditional readiness.

589
00:31:19,740 --> 00:31:23,740
And if the answer is a flat no, then the system is still trying to reconstruct control after

590
00:31:23,740 --> 00:31:26,100
the fact, which isn't just an audit weakness.

591
00:31:26,100 --> 00:31:28,460
It is a fundamental operating model weakness.

592
00:31:28,460 --> 00:31:30,220
Question 5 is the most important one of all.

593
00:31:30,220 --> 00:31:33,380
Does your governance make the right behavior the easiest path for the user?

594
00:31:33,380 --> 00:31:36,180
That question gets to the heart of the entire system.

595
00:31:36,180 --> 00:31:39,660
Mature environments are designed to reduce the dependency on perfect human judgment by

596
00:31:39,660 --> 00:31:42,180
guiding action through defaults and automation.

597
00:31:42,180 --> 00:31:44,580
Immature environments do the exact opposite.

598
00:31:44,580 --> 00:31:48,100
They ask people to remember too many rules, interpret too many policies and compensate

599
00:31:48,100 --> 00:31:49,900
for the system's failures far too often.

600
00:31:49,900 --> 00:31:54,220
So if the right behavior is easy, by default, you are operating in a much stronger maturity

601
00:31:54,220 --> 00:31:55,220
zone.

602
00:31:55,220 --> 00:31:57,940
If that only happens sometimes, your performance is still uneven.

603
00:31:57,940 --> 00:32:02,060
If the risky path is still the fastest way to get work done, your governance is not mature

604
00:32:02,060 --> 00:32:04,660
and it doesn't matter how many policies say otherwise.

605
00:32:04,660 --> 00:32:06,620
Now score yourself simply.

606
00:32:06,620 --> 00:32:10,820
If you answered mostly yes, you are likely operating around level 400.

607
00:32:10,820 --> 00:32:14,740
If the answers were mixed, you are probably in that level 200 to 300 range.

608
00:32:14,740 --> 00:32:18,740
If the answers were mostly no, you are closer to level 100 than most leadership teams ever

609
00:32:18,740 --> 00:32:19,740
want to admit.

610
00:32:19,740 --> 00:32:21,220
And that is perfectly fine, by the way.

611
00:32:21,220 --> 00:32:24,300
The goal here isn't to sound advanced or impress a board.

612
00:32:24,300 --> 00:32:27,620
The goal is to see the reality of your situation clearly.

613
00:32:27,620 --> 00:32:31,260
Once you can see your real maturity, you can stop funding comfort signals and start fixing

614
00:32:31,260 --> 00:32:33,540
the structural weaknesses that actually matter.

615
00:32:33,540 --> 00:32:35,420
That is the real value of this diagnostic.

616
00:32:35,420 --> 00:32:40,700
It turns maturity from a branding exercise into a necessary reality check.

617
00:32:40,700 --> 00:32:45,540
So before you open another dashboard or announce another AI rollout, ask those five questions

618
00:32:45,540 --> 00:32:48,860
and answer them like an auditor would, not like a steering committee would.

619
00:32:48,860 --> 00:32:49,860
Pattern 1.

620
00:32:49,860 --> 00:32:51,660
Audit time reveals maturity fast.

621
00:32:51,660 --> 00:32:55,380
Now, let's map that five question check to one of the clearest signals I see across

622
00:32:55,380 --> 00:32:56,380
different environments.

623
00:32:56,380 --> 00:32:58,020
So I'm talking about audit time.

624
00:32:58,020 --> 00:33:01,820
If you want a fast way to understand if your governance is real, look at how long it takes

625
00:33:01,820 --> 00:33:03,460
to prepare for an audit request.

626
00:33:03,460 --> 00:33:07,380
Don't look at how confidently people talk about being ready or how many documents they've

627
00:33:07,380 --> 00:33:10,700
saved, but look at how long it takes to produce usable evidence.

628
00:33:10,700 --> 00:33:13,180
Because audit time is a compression test for your business.

629
00:33:13,180 --> 00:33:18,540
It forces the system to reveal if ownership is clear, if evidence trails exist, and if

630
00:33:18,540 --> 00:33:21,540
the environment can explain itself without a total panic.

631
00:33:21,540 --> 00:33:25,580
In low maturity environments, audit preparation usually takes four to six weeks and sometimes

632
00:33:25,580 --> 00:33:27,260
it takes even longer than that.

633
00:33:27,260 --> 00:33:29,900
That time isn't spent on thoughtful analysis or strategy.

634
00:33:29,900 --> 00:33:31,140
It is spent chasing.

635
00:33:31,140 --> 00:33:35,380
You are chasing screenshots, chasing approvals and chasing admins who might know where a specific

636
00:33:35,380 --> 00:33:36,580
report lives.

637
00:33:36,580 --> 00:33:40,860
You end up chasing business owners who were never clearly assigned in the first place trying

638
00:33:40,860 --> 00:33:43,540
to find a history that should already exist but doesn't.

639
00:33:43,540 --> 00:33:46,980
That is why I always say that audit pain is almost never just an audit problem.

640
00:33:46,980 --> 00:33:49,060
It is an operating model problem made visible.

641
00:33:49,060 --> 00:33:50,260
The reason is simple.

642
00:33:50,260 --> 00:33:54,300
If your governance is mature, evidence is just a natural byproduct of your normal operations.

643
00:33:54,300 --> 00:33:58,300
If your governance is immature, evidence becomes a massive reconstruction project.

644
00:33:58,300 --> 00:34:00,780
Those are two completely different business realities.

645
00:34:00,780 --> 00:34:04,820
In stronger environments, audit preparation drops down to just one or two weeks and some

646
00:34:04,820 --> 00:34:07,620
organizations stay close to being continuously ready.

647
00:34:07,620 --> 00:34:11,100
This isn't because the auditors are asking easier questions but because the system already

648
00:34:11,100 --> 00:34:14,300
knows where the ownership sits and how exceptions were handled.

649
00:34:14,300 --> 00:34:17,060
That changes the whole experience for everyone involved.

650
00:34:17,060 --> 00:34:21,740
The conversation shifts from "Can we rebuild enough proof in time?" to "What does this

651
00:34:21,740 --> 00:34:24,020
evidence tell us about our control quality?"

652
00:34:24,020 --> 00:34:27,020
That is a much more mature posture for a leader to take.

653
00:34:27,020 --> 00:34:30,020
And it is worth pausing to look at what actually drives that difference.

654
00:34:30,020 --> 00:34:33,660
It is usually not one magic tool or a single piece of software.

655
00:34:33,660 --> 00:34:38,420
It is the combination of clear ownership, retention discipline, evidence trails and automation

656
00:34:38,420 --> 00:34:40,060
around repeatable controls.

657
00:34:40,060 --> 00:34:43,900
When those four elements are present, the organization finally stops relying on human

658
00:34:43,900 --> 00:34:45,500
memory to stay compliant.

659
00:34:45,500 --> 00:34:46,700
That is the big breakthrough.

660
00:34:46,700 --> 00:34:49,100
Because memory is not a scalable form of governance.

661
00:34:49,100 --> 00:34:51,580
Memory is just a way to survive temporarily.

662
00:34:51,580 --> 00:34:55,580
I have seen environments where one senior admin held half the audit narrative inside their

663
00:34:55,580 --> 00:34:56,580
head.

664
00:34:56,580 --> 00:34:57,900
They knew which report to run.

665
00:34:57,900 --> 00:35:01,820
They knew which exception had a verbal approval and they knew which process bypassed the official

666
00:35:01,820 --> 00:35:03,180
rules two years ago.

667
00:35:03,180 --> 00:35:07,380
That kind of knowledge can hold a system together for a while but from a structural perspective

668
00:35:07,380 --> 00:35:11,300
it is a single point of failure and the audit will expose that failure immediately.

669
00:35:11,300 --> 00:35:15,580
On the other side, higher maturity environments don't need a hero to explain what is happening.

670
00:35:15,580 --> 00:35:17,380
The environment explains itself.

671
00:35:17,380 --> 00:35:21,980
It isn't always perfect but it is reliable enough that the organization can move forward with

672
00:35:21,980 --> 00:35:23,260
real confidence.

673
00:35:23,260 --> 00:35:25,820
That is a massive difference in executive terms.

674
00:35:25,820 --> 00:35:29,300
Every extra week spent on audit preparation pulls your most skilled people away from the

675
00:35:29,300 --> 00:35:31,460
work that actually grows the business.

676
00:35:31,460 --> 00:35:35,020
Security slows down, collaboration teams get interrupted and business owners get dragged

677
00:35:35,020 --> 00:35:37,100
into endless clarification loops.

678
00:35:37,100 --> 00:35:40,940
Leadership attention gets consumed by the act of proving control instead of actually improving

679
00:35:40,940 --> 00:35:41,940
it.

680
00:35:41,940 --> 00:35:46,180
So when you see long audit cycles, don't just frame them as unfortunate overhead.

681
00:35:46,180 --> 00:35:47,180
Look at them for what they are.

682
00:35:47,180 --> 00:35:51,620
They are telling you that your governance model is incredibly expensive to operate.

683
00:35:51,620 --> 00:35:55,660
Across the maturity journey that expense drops fast once you replace human compensation

684
00:35:55,660 --> 00:35:57,060
with actual structure.

685
00:35:57,060 --> 00:36:02,620
A 50 to 70% reduction in audit effort is a massive maturity signal because it reflects a deeper

686
00:36:02,620 --> 00:36:04,100
order underneath the surface.

687
00:36:04,100 --> 00:36:07,900
It tells you that your evidence is less scattered and your ownership is no longer ambiguous.

688
00:36:07,900 --> 00:36:12,300
It means your control execution doesn't depend on manual reconstruction and your decision

689
00:36:12,300 --> 00:36:14,180
history is finally easy to trust.

690
00:36:14,180 --> 00:36:17,260
So if you are a leader listening to this here is the practical takeaway.

691
00:36:17,260 --> 00:36:18,980
Ask your team one simple question.

692
00:36:18,980 --> 00:36:22,780
If an audit request landed on your desk tomorrow, how many weeks would it take us to produce

693
00:36:22,780 --> 00:36:24,380
evidence we actually trust?

694
00:36:24,380 --> 00:36:26,220
Then listen very carefully to that answer.

695
00:36:26,220 --> 00:36:29,740
If the answer is measured in weeks, you are not looking at an audit issue.

696
00:36:29,740 --> 00:36:34,060
You are looking at a system that still depends on hidden labor to function and hidden labor

697
00:36:34,060 --> 00:36:35,340
is always a warning sign.

698
00:36:35,340 --> 00:36:38,940
It means the business is paying a high price for fragility every time the pressure shows

699
00:36:38,940 --> 00:36:39,940
up.

700
00:36:39,940 --> 00:36:41,980
Audit pain is just the visible part of the problem.

701
00:36:41,980 --> 00:36:46,420
But the biggest story is what that pain reveals about how your environment actually runs.

702
00:36:46,420 --> 00:36:47,420
Pattern 2.

703
00:36:47,420 --> 00:36:50,100
Data exposure is usually a design problem.

704
00:36:50,100 --> 00:36:54,540
The second pattern we identified showed up even faster than audit pain once we looked closely

705
00:36:54,540 --> 00:36:55,700
at the mechanics.

706
00:36:55,700 --> 00:36:59,740
Most leaders treat data exposure like a moral failing or a lapse in judgment, but the reality

707
00:36:59,740 --> 00:37:00,980
is much simpler.

708
00:37:00,980 --> 00:37:03,700
Data exposure is usually not a user morality problem.

709
00:37:03,700 --> 00:37:05,420
It is a design problem.

710
00:37:05,420 --> 00:37:09,780
That distinction matters because many organizations still respond to oversharing as if the core issue

711
00:37:09,780 --> 00:37:11,500
is individual carelessness.

712
00:37:11,500 --> 00:37:15,480
They point to someone who shared a link too broadly, stored sensitive content in the wrong

713
00:37:15,480 --> 00:37:18,420
folder or simply forgot to apply a security label.

714
00:37:18,420 --> 00:37:22,100
While those individual actions happen every day, the more important question for any system

715
00:37:22,100 --> 00:37:26,300
architect is why the environment made it so easy to fail in the first place.

716
00:37:26,300 --> 00:37:30,880
If broad access is the default setting and inheritance has been drifting for years, then

717
00:37:30,880 --> 00:37:32,660
oversharing isn't a surprise.

718
00:37:32,660 --> 00:37:37,300
When sensitive content sits in unlabeled libraries and external sharing rules are managed

719
00:37:37,300 --> 00:37:41,380
by local habit rather than central logic, the system is simply doing what it was built

720
00:37:41,380 --> 00:37:42,380
to do.

721
00:37:42,380 --> 00:37:43,380
It is a system outcome.

722
00:37:43,380 --> 00:37:47,620
Low maturity tenants almost always share the same structural shape, and they rely on broad

723
00:37:47,620 --> 00:37:51,620
permissions and weak visibility, which is a dangerous combination when paired with unclear

724
00:37:51,620 --> 00:37:54,860
ownership and old workspaces that never got cleaned up.

725
00:37:54,860 --> 00:37:58,620
Because the environment is sprawling and messy, the organization starts guessing about its

726
00:37:58,620 --> 00:38:01,900
exposure instead of measuring it with any degree of accuracy.

727
00:38:01,900 --> 00:38:05,740
When you guess at exposure, your response becomes selective and reactive.

728
00:38:05,740 --> 00:38:09,340
The loudest issue gets the most attention, the most visible problem gets a meeting, and

729
00:38:09,340 --> 00:38:13,260
the structural flaw that caused both remains untouched in the background.

730
00:38:13,260 --> 00:38:16,780
High maturity tenants look different because they are built with intention rather than

731
00:38:16,780 --> 00:38:18,100
left to grow wild.

732
00:38:18,100 --> 00:38:21,220
In these structured environments, workspaces aren't just spawned.

733
00:38:21,220 --> 00:38:24,420
They are created with clear ownership and high label coverage.

734
00:38:24,420 --> 00:38:28,860
Access boundaries are easy to explain to an auditor, and life cycle logic removes stale

735
00:38:28,860 --> 00:38:32,540
data before it turns into a quiet risk for the company.

736
00:38:32,540 --> 00:38:37,220
Once those design choices start aligning, unintended exposure drops mechanically rather than

737
00:38:37,220 --> 00:38:39,580
through some sudden burst of employee discipline.

738
00:38:39,580 --> 00:38:43,420
I'm always careful with awareness campaigns because while they can help, they don't solve

739
00:38:43,420 --> 00:38:44,900
architectural problems.

740
00:38:44,900 --> 00:38:48,420
Awareness does not redesign inheritance, it doesn't clean up dormant sites, and it certainly

741
00:38:48,420 --> 00:38:52,300
doesn't make security labels measurable across a million files.

742
00:38:52,300 --> 00:38:56,780
Architecture does that work, and across stronger environments we saw unintended exposure drop

743
00:38:56,780 --> 00:39:00,500
by 30 to 60 percent, once design and measurement improved together.

744
00:39:00,500 --> 00:39:03,620
Now map that logic directly to Microsoft 365.

745
00:39:03,620 --> 00:39:07,680
SharePoint, Teams and OneDrive are incredibly powerful collaboration layers, but power without

746
00:39:07,680 --> 00:39:10,820
structure turns into permission drift almost immediately.

747
00:39:10,820 --> 00:39:15,620
One owner leaves the company, a team changes its purpose, or a site is copied from an old

748
00:39:15,620 --> 00:39:20,460
flawed pattern, and suddenly you have a business critical workspace with no meaningful controls.

749
00:39:20,460 --> 00:39:25,580
This is where a tool like Purview becomes important, but only if you use it to create a measurable

750
00:39:25,580 --> 00:39:27,940
environment rather than a decorative one.

751
00:39:27,940 --> 00:39:32,140
If labels exist but coverage is low, or if DLP is turned on but ownership is weak, the

752
00:39:32,140 --> 00:39:34,020
exposure remains exactly where it was.

753
00:39:34,020 --> 00:39:38,100
The mature move isn't just turning on features, it's aligning those features with workspace design

754
00:39:38,100 --> 00:39:41,580
and lifecycle review to actually reduce your surface area.
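The "measurable environment" point above lends itself to a small illustration. This is a hedged sketch, not a Purview API call: it assumes you have already exported a file inventory somehow, and the field names (`site`, `label`) are invented for the example. It simply turns that inventory into label-coverage numbers per site, which is the kind of metric the episode argues for.

```python
# Illustrative only: compute sensitivity-label coverage per site from an
# exported file inventory. "site" and "label" are assumed field names, not
# a real Purview schema.
from collections import defaultdict

def label_coverage(inventory):
    """Return {site: (labeled, total, pct)} for a list of file records."""
    totals = defaultdict(lambda: [0, 0])  # site -> [labeled count, total count]
    for item in inventory:
        counts = totals[item["site"]]
        counts[1] += 1
        if item.get("label"):  # empty or missing label counts as unlabeled
            counts[0] += 1
    return {
        site: (labeled, total, round(100 * labeled / total, 1))
        for site, (labeled, total) in totals.items()
    }

files = [
    {"site": "Finance", "label": "Confidential"},
    {"site": "Finance", "label": ""},
    {"site": "Projects", "label": None},
    {"site": "Projects", "label": "General"},
    {"site": "Projects", "label": "General"},
]
print(label_coverage(files))  # Finance: 50.0% coverage, Projects: 66.7%
```

A number like this per workspace is what turns "labels exist" into "coverage is measurable", which is the maturity distinction being made here.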

755
00:39:41,580 --> 00:39:46,780
Once you start seeing exposure as a design output, the conversation in the boardroom changes.

756
00:39:46,780 --> 00:39:50,540
You stop blaming users first, you stop assuming training is the magic answer, and you stop

757
00:39:50,540 --> 00:39:54,460
acting surprised every time a sensitive document ends up in the wrong hands.

758
00:39:54,460 --> 00:39:58,340
Instead you ask where the broad defaults are hiding, and where you are asking people to

759
00:39:58,340 --> 00:40:02,620
compensate for a structural weakness that shouldn't exist.

760
00:40:02,620 --> 00:40:03,620
Pattern 3.

761
00:40:03,620 --> 00:40:06,020
Copilot readiness depends on GRC maturity.

762
00:40:06,020 --> 00:40:09,220
Now we get to the part of the conversation that everyone wants to start with, even though

763
00:40:09,220 --> 00:40:11,580
it usually belongs much later in the process.

764
00:40:11,580 --> 00:40:16,860
There is massive executive pressure right now to move faster on Copilot, AI agents and automated

765
00:40:16,860 --> 00:40:19,180
workflows, but here is the blunt truth.

766
00:40:19,180 --> 00:40:23,700
Copilot readiness is not mainly a licensing question or a prompt training exercise, it is

767
00:40:23,700 --> 00:40:25,660
a GRC maturity question first.

768
00:40:25,660 --> 00:40:28,620
Copilot does not create a new reality from scratch.

769
00:40:28,620 --> 00:40:31,820
It simply traverses the one you've already built over the last decade.

770
00:40:31,820 --> 00:40:36,420
It uses your existing permissions, your content and your labeling discipline, which means

771
00:40:36,420 --> 00:40:41,340
whatever is weak in your tenant becomes visible much faster once AI starts moving through

772
00:40:41,340 --> 00:40:42,340
it.

773
00:40:42,340 --> 00:40:47,420
Low maturity organizations tend to hit a very specific wall during their copilot rollout.

774
00:40:47,420 --> 00:40:51,460
Security teams get nervous about what the AI might find, legal teams slow down every decision,

775
00:40:51,460 --> 00:40:56,540
and business leaders ask for a level of confidence that the IT team simply cannot provide.

776
00:40:56,540 --> 00:41:00,740
Users try the tool, get uneven or irrelevant answers and quietly decide the technology isn't

777
00:41:00,740 --> 00:41:02,780
as useful as the marketing promised.

778
00:41:02,780 --> 00:41:06,780
From the outside this looks like a failure of AI adoption, but from a systems perspective

779
00:41:06,780 --> 00:41:09,740
it's usually a governance failure wearing an AI label.

780
00:41:09,740 --> 00:41:13,340
If the data layer isn't trustworthy and the permission model isn't explainable, the

781
00:41:13,340 --> 00:41:16,820
organization does what all immature systems do under uncertainty.

782
00:41:16,820 --> 00:41:18,100
It hesitates.

783
00:41:18,100 --> 00:41:22,380
And hesitation is actually rational because if you can't explain what the AI can see, the

784
00:41:22,380 --> 00:41:23,860
environment isn't ready for scale.

785
00:41:23,860 --> 00:41:26,100
We saw this pattern clearly throughout 2026.

786
00:41:26,100 --> 00:41:29,780
Most copilot deployments that lose momentum do so in the first two or three months because

787
00:41:29,780 --> 00:41:34,540
governance was treated like a one time setup task instead of a living operating function.

788
00:41:34,540 --> 00:41:38,180
Once real work begins, unlabeled content and broad permissions stop being theoretical

789
00:41:38,180 --> 00:41:40,620
risks and start being active blockers to progress.

790
00:41:40,620 --> 00:41:43,700
This is why I call copilot a maturity revealer.

791
00:41:43,700 --> 00:41:47,820
At low maturity, the AI experience feels risky and underwhelming at the same time because

792
00:41:47,820 --> 00:41:52,340
no one trusts the boundaries and messy content creates weak grounding.

793
00:41:52,340 --> 00:41:55,980
People don't just need AI to be safe, they need it to be useful and usefulness depends

794
00:41:55,980 --> 00:41:58,740
on governed information just as much as safety does.

795
00:41:58,740 --> 00:42:02,460
Compare that to a high maturity environment where the data estate is structured and sensitive

796
00:42:02,460 --> 00:42:04,260
content is consistently labeled.

797
00:42:04,260 --> 00:42:07,780
In these organizations, workspace ownership is clear and life cycle controls have already

798
00:42:07,780 --> 00:42:11,980
removed the noisy stale data that would otherwise confuse an AI model.

799
00:42:11,980 --> 00:42:15,460
When copilot operates here, the outputs improve and the risk conversation changes because

800
00:42:15,460 --> 00:42:19,540
the risk is finally explainable. That is a major threshold for leadership to cross.

801
00:42:19,540 --> 00:42:23,340
Once security can describe the control stack and compliance can show the evidence trail,

802
00:42:23,340 --> 00:42:28,020
AI stops feeling like uncontrolled acceleration and starts feeling like a governed business

803
00:42:28,020 --> 00:42:29,540
capability.

804
00:42:29,540 --> 00:42:34,060
Adoption rises not because of the hype but because there is a foundation of structural trust.

805
00:42:34,060 --> 00:42:38,140
Purview matters immensely here, but only when it is part of a broader operating model

806
00:42:38,140 --> 00:42:42,300
that values measurable coverage over simple feature activation.

807
00:42:42,300 --> 00:42:45,380
Tooling amplifies your existing maturity, it does not replace it.

808
00:42:45,380 --> 00:42:49,020
If you want to know if your organization is truly ready for AI, don't look at the demo,

809
00:42:49,020 --> 00:42:50,980
look at the maturity stack underneath it.

810
00:42:50,980 --> 00:42:54,420
Can you trust your permissions, your labels and your ownership model when something inevitably

811
00:42:54,420 --> 00:42:55,820
needs remediation?

812
00:42:55,820 --> 00:43:00,580
If the answer is mixed, your AI outcomes will be mixed as well and that has a massive implication

813
00:43:00,580 --> 00:43:02,340
for your business velocity.

814
00:43:02,340 --> 00:43:05,900
Maturity is the difference between a fast approval and an expensive license sitting inside

815
00:43:05,900 --> 00:43:09,700
a hesitant organization that doesn't trust its own environment.

816
00:43:09,700 --> 00:43:13,580
When a copilot rollout fails to deliver, don't just ask if the AI was good enough, ask

817
00:43:13,580 --> 00:43:16,500
if the tenant was governable enough to let the AI work.

818
00:43:16,500 --> 00:43:21,900
In the world of Microsoft 365, AI readiness is usually just GRC maturity made impossible

819
00:43:21,900 --> 00:43:22,900
to ignore.

820
00:43:22,900 --> 00:43:26,580
If you audited your structural resilience the same way you audit your systems, would you

821
00:43:26,580 --> 00:43:31,460
find a foundation designed to sustain this new scale or one that is slowly draining your

822
00:43:31,460 --> 00:43:32,460
potential?

823
00:43:32,460 --> 00:43:34,500
Counterintuitive finding one.

824
00:43:34,500 --> 00:43:36,380
Training does not fix governance.

825
00:43:36,380 --> 00:43:40,020
Now here is where a lot of well-meaning organizations burn through their time and budget

826
00:43:40,020 --> 00:43:44,460
without actually changing the result. They see risky behaviour, oversharing or weak label

827
00:43:44,460 --> 00:43:48,300
adoption and the first response is almost always to launch more training.

828
00:43:48,300 --> 00:43:52,500
Leaders call for more awareness, more digital posters and more reminders to follow the process

829
00:43:52,500 --> 00:43:57,500
but they often ignore the underlying system that drives those behaviours in the first place.

830
00:43:57,500 --> 00:44:01,220
I am not dismissing training because people definitely need context to understand what good

831
00:44:01,220 --> 00:44:03,220
looks like and why it matters for the business.

832
00:44:03,220 --> 00:44:07,460
However, training is not the primary fix for a structurally weak environment and it can

833
00:44:07,460 --> 00:44:09,740
never be the sole solution for a design problem.

834
00:44:09,740 --> 00:44:13,580
If the system makes the wrong behaviour easier or faster than the right one then you are

835
00:44:13,580 --> 00:44:17,340
asking training to fight against human nature every single day.

836
00:44:17,340 --> 00:44:20,980
Design usually wins that fight, which is a reality many leaders don't like to hear because

837
00:44:20,980 --> 00:44:23,580
training feels responsible and manageable.

838
00:44:23,580 --> 00:44:27,180
You can measure attendance, launch a big campaign and tell the board that something was

839
00:44:27,180 --> 00:44:31,980
done but if the environment stays the same you've only improved awareness while preserving

840
00:44:31,980 --> 00:44:34,140
the exact same path to failure.

841
00:44:34,140 --> 00:44:37,540
That isn't a sign of maturity, it is just a documentation of effort.

842
00:44:37,540 --> 00:44:41,540
Across different tenants I kept seeing the same pattern where users were told to classify

843
00:44:41,540 --> 00:44:45,180
data better while the labels remained incredibly easy to ignore.

844
00:44:45,180 --> 00:44:49,020
People were told to share carefully, yet broad access stayed the default setting in too many

845
00:44:49,020 --> 00:44:52,660
places and the approved governance path was consistently slower than the common work

846
00:44:52,660 --> 00:44:53,660
around.

847
00:44:53,660 --> 00:44:57,300
When a system is under pressure it will always produce people who optimise for speed not

848
00:44:57,300 --> 00:45:00,780
because they are reckless but because the environment rewards the shortcut.

849
00:45:00,780 --> 00:45:03,060
This is where the human blame story usually breaks down.

850
00:45:03,060 --> 00:45:06,580
We say people are being careless or failing to follow the process but often they are just

851
00:45:06,580 --> 00:45:10,220
adapting to a process that is too heavy for the actual pace of their work.

852
00:45:10,220 --> 00:45:13,620
Instead of more awareness what they really need is an environment where safe behaviour

853
00:45:13,620 --> 00:45:15,060
is the easiest path to take.

854
00:45:15,060 --> 00:45:18,780
From a system perspective behaviour isn't driven by access alone.

855
00:45:18,780 --> 00:45:23,740
It is driven by the environment which includes defaults, friction and workflow structure.

856
00:45:23,740 --> 00:45:27,500
Whether a label is prompted or skipped or whether an exception path is fast enough to actually

857
00:45:27,500 --> 00:45:32,380
be used are all system conditions that shape behaviour more reliably than a training session.

858
00:45:32,380 --> 00:45:36,740
I've seen organisations with massive awareness programmes still produce risky patterns because

859
00:45:36,740 --> 00:45:40,500
the tenant kept asking people to manually compensate for weaknesses the architecture should

860
00:45:40,500 --> 00:45:41,500
have handled.

861
00:45:41,500 --> 00:45:45,380
It is unfair to users and represents poor governance to expect people to compensate

862
00:45:45,380 --> 00:45:49,780
indefinitely for a system that allows risk by default. Sooner or later the volume of

863
00:45:49,780 --> 00:45:53,500
work and the need for convenience will win and leaders shouldn't be surprised when the

864
00:45:53,500 --> 00:45:55,420
same problems keep coming back.

865
00:45:55,420 --> 00:45:59,580
What actually works better is safer defaults, more enforced flows and less optionality around

866
00:45:59,580 --> 00:46:01,020
the controls that actually matter.

867
00:46:01,020 --> 00:46:04,820
If you want label usage to rise don't just explain them better, make labeling a measurable

868
00:46:04,820 --> 00:46:06,500
part of the normal work rhythm.

869
00:46:06,500 --> 00:46:10,780
If you want sharing risk to drop, reduce those broad defaults and ensure there is actual

870
00:46:10,780 --> 00:46:12,820
ownership behind access reviews.

871
00:46:12,820 --> 00:46:16,220
Once the environment is fixed, training starts helping again because it reinforces a good

872
00:46:16,220 --> 00:46:19,340
system instead of trying to rescue a broken one.

873
00:46:19,340 --> 00:46:23,180
Environment comes first and education comes second because education only compounds when

874
00:46:23,180 --> 00:46:25,820
the operating model actually supports it.

875
00:46:25,820 --> 00:46:30,100
Training can improve a person's judgement but it cannot fix a fundamental design flaw.

876
00:46:30,100 --> 00:46:33,780
If your governance still depends on people remembering the safe thing while the system makes

877
00:46:33,780 --> 00:46:35,260
the risky thing easy,

878
00:46:35,260 --> 00:46:37,860
your problem isn't educational, it's architectural.

879
00:46:37,860 --> 00:46:41,380
Counterintuitive finding two. More policies often mean less control.

880
00:46:41,380 --> 00:46:44,940
The second mistake looks very responsible on paper which is exactly why it survives for

881
00:46:44,940 --> 00:46:48,060
so long in corporate environments.

882
00:46:48,060 --> 00:46:52,140
When leaders notice inconsistency or exposure they often respond by adding another policy

883
00:46:52,140 --> 00:46:55,580
document, another standard or another layer of interpretation.

884
00:46:55,580 --> 00:46:59,580
The assumption is that if control feels weak then adding more rules must create more

885
00:46:59,580 --> 00:47:02,980
control but in practice this often does the exact opposite.

886
00:47:02,980 --> 00:47:05,420
Policy volume and policy effectiveness are not the same thing.

887
00:47:05,420 --> 00:47:10,020
A policy only creates value if it changes behavior in a reliable way but if it just sits

888
00:47:10,020 --> 00:47:13,620
in a portal and creates a debate every time someone tries to apply it then it is just

889
00:47:13,620 --> 00:47:15,100
administrative weight.

890
00:47:15,100 --> 00:47:18,460
That weight creates friction at the point of execution which is the worst possible place

891
00:47:18,460 --> 00:47:19,860
for a system to slow down.

892
00:47:19,860 --> 00:47:23,620
Low and mid maturity organizations get trapped in the cycle where they write a new rule

893
00:47:23,620 --> 00:47:26,780
for every single control gap or edge case they find.

894
00:47:26,780 --> 00:47:31,140
The environment ends up surrounded by policy language that looks mature from the outside but creates

895
00:47:31,140 --> 00:47:34,260
a massive interpretation burden for the people inside.

896
00:47:34,260 --> 00:47:38,340
Teams stop asking what the right behavior is and start asking which document applies or

897
00:47:38,340 --> 00:47:40,140
who needs to approve the exception.

898
00:47:40,140 --> 00:47:44,260
Once governance has to be interpreted repeatedly under pressure, local workarounds start multiplying,

899
00:47:44,260 --> 00:47:46,740
and this is the hidden cost of policy sprawl.

900
00:47:46,740 --> 00:47:51,300
It doesn't just confuse people; it actually decentralizes control by turning it into a matter

901
00:47:51,300 --> 00:47:52,980
of personal judgment.

902
00:47:52,980 --> 00:47:56,020
Different managers will tolerate different shortcuts and different admins will enforce

903
00:47:56,020 --> 00:48:00,460
different versions of the same intent leaving the organization with more language but less

904
00:48:00,460 --> 00:48:01,860
consistency.

905
00:48:01,860 --> 00:48:06,340
More policies often mean less control because excess policy pushes execution back into a state

906
00:48:06,340 --> 00:48:07,340
of ambiguity.

907
00:48:07,340 --> 00:48:11,700
If your Teams life cycle depends on three separate documents and a naming guide nobody

908
00:48:11,700 --> 00:48:15,140
remembers then your governance is simply too heavy for normal work.

909
00:48:15,140 --> 00:48:19,740
If your labeling standard requires users to interpret 10 overlapping categories then the

910
00:48:19,740 --> 00:48:22,660
system is scaling hesitation rather than control.

911
00:48:22,660 --> 00:48:26,940
Mature tenants usually look much simpler from the user side, which often surprises people who

912
00:48:26,940 --> 00:48:28,820
expect maturity to feel complex.

913
00:48:28,820 --> 00:48:32,500
It feels simpler because the complexity has been absorbed into the design, the defaults

914
00:48:32,500 --> 00:48:36,300
and the automation rather than being pushed onto the users as extra work.

915
00:48:36,300 --> 00:48:40,300
The system carries the burden so the people don't have to which makes consistent execution

916
00:48:40,300 --> 00:48:41,660
much easier to achieve.

917
00:48:41,660 --> 00:48:45,740
Leaders should want a reliable operating model rather than an impressive policy library.

918
00:48:45,740 --> 00:48:50,740
I have seen environments with thick governance packs that had almost no behavioral control

919
00:48:50,740 --> 00:48:54,340
and I've seen others with fewer rules that produced much stronger outcomes.

920
00:48:54,340 --> 00:48:56,300
The difference isn't how much was written.

921
00:48:56,300 --> 00:48:59,580
The difference is whether the rule could actually survive

922
00:48:59,580 --> 00:49:03,620
contact with the reality of daily work. You have to ask if the policy can be applied

923
00:49:03,620 --> 00:49:08,100
quickly, if ownership is understood without a debate and if the control can be enforced through

924
00:49:08,100 --> 00:49:09,660
the platform itself.

925
00:49:09,660 --> 00:49:13,300
If the answer is no then adding another document will just create another layer that people

926
00:49:13,300 --> 00:49:15,420
feel they have to route around.

927
00:49:15,420 --> 00:49:18,780
Control doesn't scale with more rules, it scales with better design and fewer decision

928
00:49:18,780 --> 00:49:19,620
points.

929
00:49:19,620 --> 00:49:24,460
When you create operating paths that make compliant behavior easier than creative interpretation,

930
00:49:24,460 --> 00:49:27,380
governance starts to feel lighter even as it becomes stronger.

931
00:49:27,380 --> 00:49:31,180
The fastest way to make this real is to look at a concrete case where that shift actually

932
00:49:31,180 --> 00:49:32,180
happened.

933
00:49:32,180 --> 00:49:33,180
Case study.

934
00:49:33,180 --> 00:49:35,380
From controlled chaos to operational governance.

935
00:49:35,380 --> 00:49:39,580
Let me show you how this looks in one concrete case because this is where maturity stops

936
00:49:39,580 --> 00:49:42,540
being abstract and starts becoming a business reality.

937
00:49:42,540 --> 00:49:47,020
I was looking at an organization with around 6000 users that wasn't chaotic in the obvious

938
00:49:47,020 --> 00:49:50,380
sense nor were they being careless or ignoring governance.

939
00:49:50,380 --> 00:49:54,460
From the outside they looked fairly responsible because policies existed, the compliance team

940
00:49:54,460 --> 00:50:00,460
was engaged and IT had already put real effort into SharePoint, Teams and Purview controls.

941
00:50:00,460 --> 00:50:03,900
If you interviewed the leadership team they would have told you governance was already

942
00:50:03,900 --> 00:50:06,740
in place and in a limited technical sense it was.

943
00:50:06,740 --> 00:50:10,740
But structurally the environment was still sitting in that awkward middle ground between

944
00:50:10,740 --> 00:50:12,500
level 200 and level 300.

945
00:50:12,500 --> 00:50:16,260
It was managed in parts and defined in pockets which meant it was still too dependent

946
00:50:16,260 --> 00:50:19,700
on human coordination to behave predictably under pressure.

947
00:50:19,700 --> 00:50:23,820
That became clear very quickly once we looked at operating behavior instead of declared intent,

948
00:50:23,820 --> 00:50:26,740
and ownership was the first fracture line we found.

949
00:50:26,740 --> 00:50:30,500
There were named stakeholders on paper but they lacked a consistently executable ownership

950
00:50:30,500 --> 00:50:32,300
model that people could actually follow.

951
00:50:32,300 --> 00:50:36,660
Some workspaces had strong local ownership while others had assumed ownership and some

952
00:50:36,660 --> 00:50:41,220
sensitive areas had clear business accountability while others were effectively being held together

953
00:50:41,220 --> 00:50:45,860
by IT and compliance trying to compensate for the absence of a real owner.

954
00:50:45,860 --> 00:50:49,660
So when reviews were needed, when exceptions appeared or when risk decisions had to be made, the

955
00:50:49,660 --> 00:50:54,340
path wasn't clean. It was negotiated every single time, and that slows everything down.

956
00:50:54,340 --> 00:50:58,500
The second issue was control consistency because while labeling existed the actual coverage

957
00:50:58,500 --> 00:50:59,980
was remarkably low.

958
00:50:59,980 --> 00:51:04,540
Roughly 25% of the relevant data estate was labeled in a way the organization could meaningfully

959
00:51:04,540 --> 00:51:09,380
rely on but that doesn't mean the other 75% was all sensitive and unprotected.

960
00:51:09,380 --> 00:51:13,500
It simply means visibility was too weak to trust the environment at scale and once you lose

961
00:51:13,500 --> 00:51:17,340
trust in visibility you lose speed in decision making.

962
00:51:17,340 --> 00:51:21,260
The third issue was audit readiness. When audit work arrived, the preparation cycle took

963
00:51:21,260 --> 00:51:26,580
about five weeks, not because nobody cared but because evidence lived across too many places

964
00:51:26,580 --> 00:51:29,780
and ownership had to be clarified repeatedly.

965
00:51:29,780 --> 00:51:33,940
The tenant could eventually explain itself, but only after a lot of chasing. This is where

966
00:51:33,940 --> 00:51:38,140
the business tension regarding Copilot became obvious. Leadership wanted AI benefits, and

967
00:51:38,140 --> 00:51:42,460
some users were already experimenting but trust was low and usage was inconsistent because

968
00:51:42,460 --> 00:51:47,100
nobody could confidently say the underlying information environment was governed enough

969
00:51:47,100 --> 00:51:49,220
to support broad adoption.

970
00:51:49,220 --> 00:51:53,100
So the organization had exactly the shape I see all the time in that middle zone where

971
00:51:53,100 --> 00:51:58,740
there is plenty of effort and plenty of governance language but not enough structural reliability.

972
00:51:58,740 --> 00:52:02,820
The shift did not come from adding a giant new policy package and it certainly did not come

973
00:52:02,820 --> 00:52:04,380
from another awareness campaign.

974
00:52:04,380 --> 00:52:06,860
Instead the shift came from three operating changes.

975
00:52:06,860 --> 00:52:10,340
First ownership was made executable rather than just being a name on a list.

976
00:52:10,340 --> 00:52:15,300
They clarified who owned what across critical workspaces and data sets then tied that ownership

977
00:52:15,300 --> 00:52:20,460
into review escalation and life cycle expectations so responsibility stopped being implied and

978
00:52:20,460 --> 00:52:22,540
started becoming operational.

979
00:52:22,540 --> 00:52:26,820
Second governance automation was introduced in the places where behavior still depended

980
00:52:26,820 --> 00:52:30,980
on memory such as review flows approval paths and evidence capture.

981
00:52:30,980 --> 00:52:34,700
Some of this can be supported through Power Automate patterns when it is governed properly

982
00:52:34,700 --> 00:52:38,140
and that matters because the point is not automation for its own sake.

983
00:52:38,140 --> 00:52:43,300
The point is removing avoidable dependence on follow up reminders and spreadsheet coordination.

984
00:52:43,300 --> 00:52:46,580
Third they chose a small set of measurable KPIs and actually used them.

985
00:52:46,580 --> 00:52:50,660
These weren't vanity metrics but operating metrics like audit preparation time, label

986
00:52:50,660 --> 00:52:53,740
coverage, review completion and remediation speed.
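To make those operating metrics concrete, here is a minimal sketch in Python of how two of them, label coverage and on-time review completion, might be computed. The records and field names are made up for illustration; in a real tenant these numbers would come from your reporting or audit tooling, not hard-coded lists.

```python
from datetime import date

# Hypothetical sample data; real figures would come from tenant reporting exports.
items = [
    {"labeled": True}, {"labeled": True}, {"labeled": False}, {"labeled": True},
]
reviews = [
    {"due": date(2024, 3, 1), "completed": date(2024, 2, 27)},
    {"due": date(2024, 3, 1), "completed": None},
]

# Label coverage: share of items carrying a sensitivity label you can rely on.
label_coverage = sum(i["labeled"] for i in items) / len(items)

# Review completion: share of access reviews finished on or before their due date.
on_time = [r for r in reviews if r["completed"] and r["completed"] <= r["due"]]
review_completion = len(on_time) / len(reviews)

print(f"label coverage: {label_coverage:.0%}")       # 75%
print(f"review completion: {review_completion:.0%}")  # 50%
```

The value of metrics like these is less the exact numbers than that they are computed the same way every time, which is what makes them hard to argue with.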

987
00:52:53,740 --> 00:52:57,420
Those metrics created one source of reality the business could act on and this is where

988
00:52:57,420 --> 00:53:00,140
the change really clicked for everyone involved.

989
00:53:00,140 --> 00:53:03,620
Once the organization could see behavior clearly governance became easier to tune without

990
00:53:03,620 --> 00:53:04,620
a constant argument.

991
00:53:04,620 --> 00:53:06,220
Now let's look at the after state.

992
00:53:06,220 --> 00:53:10,180
Audit preparation dropped from roughly five weeks to around one to two weeks which is a major

993
00:53:10,180 --> 00:53:15,060
operational shift for every team that used to get pulled into reconstruction work.

994
00:53:15,060 --> 00:53:21,100
Label coverage moved from around 25% to somewhere in the 70 to 85% range across the prioritized

995
00:53:21,100 --> 00:53:22,900
estate, and while that isn't perfect,

996
00:53:22,900 --> 00:53:27,220
it is strong enough to support more confident protection, better reporting and more explainable

997
00:53:27,220 --> 00:53:28,220
decisions.

998
00:53:28,220 --> 00:53:32,700
Copilot trust improved too not because people were forced to like AI but because the environment

999
00:53:32,700 --> 00:53:35,100
underneath it became more governable.

1000
00:53:35,100 --> 00:53:39,860
Reports became easier to trust and rollout discussions became less emotional and more evidence

1001
00:53:39,860 --> 00:53:40,860
based.

1002
00:53:40,860 --> 00:53:45,100
Adoption increased because usefulness and risk became more explainable at the same time.

1003
00:53:45,100 --> 00:53:48,060
From a business perspective that created three outcomes.

1004
00:53:48,060 --> 00:53:52,940
Faster decision cycles, lower compliance friction and clearer operational accountability.

1005
00:53:52,940 --> 00:53:54,340
That's the real lesson in this case.

1006
00:53:54,340 --> 00:53:58,220
The organization did not become more mature because it installed more control volume but

1007
00:53:58,220 --> 00:54:00,500
because it made control consistent and measurable.

1008
00:54:00,500 --> 00:54:01,900
That is the formula for success.

1009
00:54:01,900 --> 00:54:04,820
It isn't about more noise, it's about more predictability.

1010
00:54:04,820 --> 00:54:08,220
And once you see that in one tenant you start seeing the same pattern everywhere.

1011
00:54:08,220 --> 00:54:11,340
How to assess realistically without maturity theatre.

1012
00:54:11,340 --> 00:54:15,380
So if you want to assess your organization honestly the first rule is simple.

1013
00:54:15,380 --> 00:54:16,580
Start with observed behavior.

1014
00:54:16,580 --> 00:54:20,620
Don't look at declared intention, policy ambition or what the steering group believes

1015
00:54:20,620 --> 00:54:21,620
should be true.

1016
00:54:21,620 --> 00:54:24,900
Look at what actually happens in the tenant under normal conditions.

1017
00:54:24,900 --> 00:54:28,620
Maturity theatre begins the moment we confuse implementation with outcome, assuming

1018
00:54:28,620 --> 00:54:32,860
that because a control exists it must work or because a policy was published,

1019
00:54:32,860 --> 00:54:34,220
behavior must have changed.

1020
00:54:34,220 --> 00:54:37,100
That logic is exactly how organizations overrate themselves.

1021
00:54:37,100 --> 00:54:39,340
What you want instead is proof from operation.

1022
00:54:39,340 --> 00:54:42,860
Look at audit behavior and ask how long evidence really takes to produce.

1023
00:54:42,860 --> 00:54:46,900
Then look at exposure behavior to see where sensitive content is broadly accessible or poorly

1024
00:54:46,900 --> 00:54:47,900
owned.

1025
00:54:47,900 --> 00:54:52,020
Look at control behavior to see if reviews are completed on time or only after escalation

1026
00:54:52,020 --> 00:54:53,020
and chasing.

1027
00:54:53,020 --> 00:54:56,780
Finally look at exception behavior to see if exceptions are routed through a visible path

1028
00:54:56,780 --> 00:54:59,100
or handled through side channels and memory.

1029
00:54:59,100 --> 00:55:03,260
Those signals tell the truth much faster than maturity language ever will.

1030
00:55:03,260 --> 00:55:08,060
The second rule is to assess by workload and process not by tenant wide averages alone.

1031
00:55:08,060 --> 00:55:11,780
This is where a lot of organizations accidentally flatter themselves by saying they have labeling

1032
00:55:11,780 --> 00:55:15,660
or access reviews without specifying where or across what scope.

1033
00:55:15,660 --> 00:55:19,500
Averages hide unevenness and unevenness is usually the real maturity story.

1034
00:55:19,500 --> 00:55:23,860
You may have level 400 behavior in a legal environment and level 200 behavior in a sprawling

1035
00:55:23,860 --> 00:55:28,180
collaboration estate and since both can exist in the same tenant at the same time assessing

1036
00:55:28,180 --> 00:55:32,300
only at the top line means you miss the places where fragility is still driving business

1037
00:55:32,300 --> 00:55:33,540
risk.

1038
00:55:33,540 --> 00:55:37,660
The third rule is to separate implemented controls from adopted controls and this one matters

1039
00:55:37,660 --> 00:55:38,660
a lot.

1040
00:55:38,660 --> 00:55:42,860
A control that is technically available but weakly used is not mature just as a label published

1041
00:55:42,860 --> 00:55:46,540
to the tenant is not the same as a label used with measurable coverage.

1042
00:55:46,540 --> 00:55:50,660
A review capability enabled in the platform is not the same as a review discipline the business

1043
00:55:50,660 --> 00:55:51,660
actually follows.

1044
00:55:51,660 --> 00:55:56,740
A DLP rule configured in a portal is not the same as a DLP operating model that people trust,

1045
00:55:56,740 --> 00:55:58,020
monitor and tune.

1046
00:55:58,020 --> 00:56:00,980
Maturity lives in adoption plus measurability, not deployment alone.

1047
00:56:00,980 --> 00:56:04,900
That's why I always push teams to gather evidence from lived operations and use what the

1048
00:56:04,900 --> 00:56:06,540
tenant can already reveal.

1049
00:56:06,540 --> 00:56:12,460
Audit timelines, label coverage, access review completion rates, remediation speed, exception

1050
00:56:12,460 --> 00:56:15,740
volumes and ownership clarity are all hard to fake for long.

1051
00:56:15,740 --> 00:56:20,260
They force better conversations than broad statements like we're doing pretty well.

1052
00:56:20,260 --> 00:56:24,140
The fourth rule is to include business owners not just IT and compliance.

1053
00:56:24,140 --> 00:56:28,900
This is where many assessments become structurally biased because IT sees what was configured

1054
00:56:28,900 --> 00:56:31,380
while compliance sees what was documented.

1055
00:56:31,380 --> 00:56:35,380
Security sees what was escalated but business owners see where the process breaks normal work.

1056
00:56:35,380 --> 00:56:39,060
They know where ownership is real and where it is performative and they know where teams

1057
00:56:39,060 --> 00:56:43,340
route around governance because the approved path is too slow or too disconnected from delivery

1058
00:56:43,340 --> 00:56:44,340
pressure.

1059
00:56:44,340 --> 00:56:47,820
If you leave them out you get a technically neat assessment and an operationally incomplete

1060
00:56:47,820 --> 00:56:51,500
one and the gap between those two is where maturity theatre survives.

1061
00:56:51,500 --> 00:56:56,220
The fifth rule is to look explicitly for drift, inconsistency and human dependency rather

1062
00:56:56,220 --> 00:56:58,380
than just searching for strengths.

1063
00:56:58,380 --> 00:57:02,460
Ask where outcomes still depend on one experienced admin or where one manager quietly approves

1064
00:57:02,460 --> 00:57:04,540
what the formal path cannot handle.

1065
00:57:04,540 --> 00:57:08,780
Ask where a metric exists but is not trusted enough to guide action or where process quality

1066
00:57:08,780 --> 00:57:11,500
changes depending on team, geography or workload.

1067
00:57:11,500 --> 00:57:14,060
That is the structural truth of your organization.

1068
00:57:14,060 --> 00:57:17,140
Maturity is not what works when the right people are available.

1069
00:57:17,140 --> 00:57:21,340
It is what still works when pressure rises, staff changes or scale increases.

1070
00:57:21,340 --> 00:57:25,260
So if I were assessing an organization tomorrow morning, I would not begin by asking how

1071
00:57:25,260 --> 00:57:27,220
advanced the governance framework sounds.

1072
00:57:27,220 --> 00:57:30,780
I would ask where behavior is still being held together by compensation because that is

1073
00:57:30,780 --> 00:57:33,860
usually where the next upgrade path becomes obvious.

1074
00:57:33,860 --> 00:57:37,700
And once you know your real level, the next move is not to boil the ocean.

1075
00:57:37,700 --> 00:57:40,060
The upgrade path from 100 to 200.

1076
00:57:40,060 --> 00:57:42,220
Let's get practical about how you actually move the needle.

1077
00:57:42,220 --> 00:57:45,820
If your environment is sitting at level 100, your first instinct might be to go shopping

1078
00:57:45,820 --> 00:57:49,900
for advanced governance tools but that is exactly the wrong move to make before you've

1079
00:57:49,900 --> 00:57:53,100
built the minimum structure that makes governance possible.

1080
00:57:53,100 --> 00:57:58,060
Level 100 doesn't suffer because it lacks sophistication, it suffers because it lacks a foundation.

1081
00:57:58,060 --> 00:58:01,900
At this stage, the environment is reactive, ownership is fuzzy and evidence is scattered

1082
00:58:01,900 --> 00:58:03,300
across the organization.

1083
00:58:03,300 --> 00:58:06,860
Most controls only appear after something goes wrong which means your first move isn't

1084
00:58:06,860 --> 00:58:08,900
optimization, it's stabilization.

1085
00:58:08,900 --> 00:58:11,900
You are trying to move from ad hoc behavior to managed behavior.

1086
00:58:11,900 --> 00:58:14,140
And that requires four specific things to happen.

1087
00:58:14,140 --> 00:58:18,100
First, you need ownership, visibility, basic operating paths and a reduction in single points of

1088
00:58:18,100 --> 00:58:19,100
failure.

1089
00:58:19,100 --> 00:58:23,060
Start with ownership because this is always the first structural repair you need to make.

1090
00:58:23,060 --> 00:58:26,820
Without clear owners, every other control becomes harder to execute.

1091
00:58:26,820 --> 00:58:31,180
So you need to know exactly who is accountable for critical workspaces and sensitive data

1092
00:58:31,180 --> 00:58:32,180
sets right now.

1093
00:58:32,180 --> 00:58:35,980
Keep this simple and don't worry about building a beautiful enterprise-wide RACI matrix

1094
00:58:35,980 --> 00:58:36,980
before you begin.

1095
00:58:36,980 --> 00:58:41,020
You just need enough clarity so that when a risk issue appears or a review is due, the

1096
00:58:41,020 --> 00:58:45,100
organization doesn't lose time debating who actually owns the problem.

1097
00:58:45,100 --> 00:58:46,700
Next we have to talk about visibility.

1098
00:58:46,700 --> 00:58:51,100
Level 100 organizations usually don't have a reliable view of where sensitive data sits

1099
00:58:51,100 --> 00:58:52,860
or which workspaces matter most.

1100
00:58:52,860 --> 00:58:56,740
So the goal here is baseline visibility rather than perfect discovery.

1101
00:58:56,740 --> 00:59:01,220
You need to ask which teams matter, which SharePoint sites carry business risk and where your

1102
00:59:01,220 --> 00:59:03,020
audit evidence currently lives.

1103
00:59:03,020 --> 00:59:06,620
You are trying to replace guessing with a working picture and even if that picture is

1104
00:59:06,620 --> 00:59:10,700
incomplete at first, partial visibility with clear next actions is still much better than

1105
00:59:10,700 --> 00:59:12,020
false confidence.

1106
00:59:12,020 --> 00:59:15,860
The third move is creating minimum operating paths for the organization.

1107
00:59:15,860 --> 00:59:19,100
This is where a lot of level 100 environments keep failing because they have no reliable

1108
00:59:19,100 --> 00:59:21,860
route for incidents, reviews or access questions.

1109
00:59:21,860 --> 00:59:25,820
Everything becomes a custom response where someone emails a colleague, someone else sends

1110
00:59:25,820 --> 00:59:29,620
a slack message and the whole process relies on someone remembering what happened last

1111
00:59:29,620 --> 00:59:30,620
time.

1112
00:59:30,620 --> 00:59:35,020
That cannot scale so you must define a basic path for how issues get raised, who reviews

1113
00:59:35,020 --> 00:59:37,180
them and where the final decision is recorded.

1114
00:59:37,180 --> 00:59:39,900
It doesn't need to be elegant yet, it just needs to be repeatable.

1115
00:59:39,900 --> 00:59:43,260
Finally you have to reduce your obvious single points of failure.

1116
00:59:43,260 --> 00:59:47,060
This clicked for me when I kept seeing the same pattern in low maturity tenants where one

1117
00:59:47,060 --> 00:59:51,380
admin knew the evidence location and one manager knew the history of every workaround.

1118
00:59:51,380 --> 00:59:55,380
That isn't resilience, it's concentration risk and it puts the entire system in jeopardy

1119
00:59:55,380 --> 00:59:57,780
if one person is unavailable.

1120
00:59:57,780 --> 01:00:00,940
Document your key paths, share ownership where it's needed and make sure basic decisions

1121
01:00:00,940 --> 01:00:03,300
don't disappear into individual inboxes.

1122
01:00:03,300 --> 01:00:07,100
I have one warning for you: do not over-engineer level 100.

1123
01:00:07,100 --> 01:00:11,060
Many teams waste six months building frameworks they aren't ready to operate so instead of

1124
01:00:11,060 --> 01:00:15,500
asking what a level 400 model looks like, ask what the smallest structural changes are that

1125
01:00:15,500 --> 01:00:17,580
remove reactivity this month.

1126
01:00:17,580 --> 01:00:21,940
Maybe you assign owners to your top 20 risky workspaces or define a single exception path.

1127
01:00:21,940 --> 01:00:25,980
That is enough to begin because the goal of moving to level 200 is to create a tenant that

1128
01:00:25,980 --> 01:00:30,500
responds in a managed way with less improvisation and less hidden labour.

1129
01:00:30,500 --> 01:00:35,900
The upgrade path from 200 to 300. If level 100 is about stopping reactivity, level 200 is

1130
01:00:35,900 --> 01:00:37,580
where a different kind of problem shows up.

1131
01:00:37,580 --> 01:00:41,820
The organization has started doing the right things, policies exist and roles are partly

1132
01:00:41,820 --> 01:00:45,340
defined which can make things look like real maturity from the outside.

1133
01:00:45,340 --> 01:00:50,340
And here is the thing at level 200, governance is often managed but still incredibly fragile.

1134
01:00:50,340 --> 01:00:53,660
It works because people are paying attention and sending manual reminders but that doesn't

1135
01:00:53,660 --> 01:00:55,060
scale well over time.

1136
01:00:55,060 --> 01:00:59,580
The move from 200 to 300 isn't about adding more governance noise, it's about turning

1137
01:00:59,580 --> 01:01:02,860
scattered good practice into a defined operating model.

1138
01:01:02,860 --> 01:01:06,340
That starts with practical standardization across Teams, SharePoint and OneDrive.

1139
01:01:06,340 --> 01:01:11,300
You need the same logic to show up in how workspaces are requested and how life cycle decisions

1140
01:01:11,300 --> 01:01:16,980
are recorded because when each workload evolves its own local habits, the tenant becomes uneven.

1141
01:01:16,980 --> 01:01:21,380
Unevenness is expensive, it creates policy drift and it leaves everyone confused about what

1142
01:01:21,380 --> 01:01:22,820
the real rules actually are.

1143
01:01:22,820 --> 01:01:27,180
The next move is to move away from spreadsheet governance and email based approvals.

1144
01:01:27,180 --> 01:01:31,260
This is one of the clearest signs that a level 200 environment is still compensating for

1145
01:01:31,260 --> 01:01:32,340
structural gaps.

1146
01:01:32,340 --> 01:01:35,460
While critical decisions are technically being made, they are happening in ways that are

1147
01:01:35,460 --> 01:01:37,820
hard to trace and nearly impossible to repeat.

1148
01:01:37,820 --> 01:01:42,780
An approval buried in an inbox is not a durable control model and a review status sitting

1149
01:01:42,780 --> 01:01:45,540
in a private spreadsheet is not a reliable operating record.

1150
01:01:45,540 --> 01:01:50,460
To fix this you need to pull core governance activity into repeatable visible paths like

1151
01:01:50,460 --> 01:01:53,060
structured lists and shared evidence locations.

1152
01:01:53,060 --> 01:01:56,540
If you can't see the process you can't really govern the process and that leads directly

1153
01:01:56,540 --> 01:01:58,660
into tightening your role boundaries.

1154
01:01:58,660 --> 01:02:03,100
Level 200 organizations often have roles but they aren't clean which leads to IT doing

1155
01:02:03,100 --> 01:02:08,100
business ownership work while security becomes the escalation point for minor issues.

1156
01:02:08,100 --> 01:02:13,540
From 200 to 300 you must clarify who decides, who executes and who only gets informed.

1157
01:02:13,540 --> 01:02:17,780
This is also the point where metrics need to become visible enough to challenge your internal

1158
01:02:17,780 --> 01:02:18,780
assumptions.

1159
01:02:18,780 --> 01:02:22,540
These don't need to be polished for executive theatre they just need to be useful enough

1160
01:02:22,540 --> 01:02:27,220
to expose where your label policies are lagging or where access review completion rates

1161
01:02:27,220 --> 01:02:28,220
are weak.

1162
01:02:28,220 --> 01:02:32,500
Once these weak spots become visible they stop hiding inside confident language and you

1163
01:02:32,500 --> 01:02:34,940
can finally address the reality of the system.

1164
01:02:34,940 --> 01:02:38,780
You also need to simplify your policy estate while you standardize your process estate.

1165
01:02:38,780 --> 01:02:43,340
If you still have overlapping guidance and locally interpreted rules, level 300 will stay

1166
01:02:43,340 --> 01:02:44,860
out of reach for your team.

1167
01:02:44,860 --> 01:02:48,660
The organization needs clearer language and stronger alignment between what the policy

1168
01:02:48,660 --> 01:02:51,100
says and how the operation actually functions.

1169
01:02:51,100 --> 01:02:54,500
That is what makes governance definable rather than just documented.

1170
01:02:54,500 --> 01:02:59,460
The upgrade path from 200 to 300 is really about taking what works in small pockets and

1171
01:02:59,460 --> 01:03:01,860
making it coherent across the entire environment.

1172
01:03:01,860 --> 01:03:06,820
You want less tribal memory, less inbox governance and more shared patterns that everyone can follow.

1173
01:03:06,820 --> 01:03:10,660
Once you do that the tenant becomes easier to understand and much easier to improve though

1174
01:03:10,660 --> 01:03:13,340
level 300 still has one big weakness.

1175
01:03:13,340 --> 01:03:16,700
Under extreme pressure it can still break and that usually happens because automation

1176
01:03:16,700 --> 01:03:18,700
is still missing from the equation.

1177
01:03:18,700 --> 01:03:21,380
The upgrade path from 300 to 400.

1178
01:03:21,380 --> 01:03:25,060
This is the specific point where a lot of organizations find themselves stuck and it

1179
01:03:25,060 --> 01:03:28,180
happens because level 300 feels respectable enough to stop.

1180
01:03:28,180 --> 01:03:31,580
You have a framework in place, the documents are written and your dashboards are actually

1181
01:03:31,580 --> 01:03:35,660
populated with data. People can point to their governance and say they are doing the work

1182
01:03:35,660 --> 01:03:37,380
and in many cases they really are.

1183
01:03:37,380 --> 01:03:41,620
But when the pressure starts to mount, level 300 reveals a structural weakness because too

1184
01:03:41,620 --> 01:03:45,980
much of the system still depends on human memory and individual goodwill.

1185
01:03:45,980 --> 01:03:50,660
Moving from 300 to 400 is the hardest and most valuable transition in this entire model

1186
01:03:50,660 --> 01:03:53,580
because you are no longer just trying to define governance.

1187
01:03:53,580 --> 01:03:57,820
You are trying to make it predictable, which means the environment behaves reliably even

1188
01:03:57,820 --> 01:04:00,020
when people are busy or staff members leave.

1189
01:04:00,020 --> 01:04:04,500
This reliability is what allows the system to hold steady when audits arrive or when AI

1190
01:04:04,500 --> 01:04:06,140
adoption begins to speed up.

1191
01:04:06,140 --> 01:04:07,620
Your first move is simple.

1192
01:04:07,620 --> 01:04:11,140
Find every single place where a control still depends on someone remembering to do their

1193
01:04:11,140 --> 01:04:12,140
job.

1194
01:04:12,140 --> 01:04:16,100
This includes remembering to review access, chasing down a workspace owner or capturing

1195
01:04:16,100 --> 01:04:18,140
the evidence needed for a future audit.

1196
01:04:18,140 --> 01:04:22,260
If a control only works when the right person remembers to act at the right time, your system

1197
01:04:22,260 --> 01:04:23,260
is fragile.

1198
01:04:23,260 --> 01:04:26,860
It might be well documented on paper but it is not yet dependable in practice.

1199
01:04:26,860 --> 01:04:31,700
This is where automation starts to matter in a very practical way, not because it sounds

1200
01:04:31,700 --> 01:04:35,020
modern but because it removes the burden of human recall.

1201
01:04:35,020 --> 01:04:39,060
You need to implement life cycle actions, review triggers and structured approval flows

1202
01:04:39,060 --> 01:04:41,380
that function without manual intervention.

1203
01:04:41,380 --> 01:04:45,420
When you use tools like Power Automate to support this shift, the automation itself must

1204
01:04:45,420 --> 01:04:49,620
be governed properly otherwise you are just creating a new layer of digital chaos.
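The kind of review trigger described above is typically built in a tool like Power Automate, but the underlying logic is simple enough to sketch. Here is a hedged illustration in Python of what such a flow decides: the workspace names, owners, and 90-day interval are all invented for the example, not a recommended configuration.

```python
from datetime import date, timedelta

today = date(2024, 6, 1)  # fixed date so the example is deterministic

# Hypothetical workspace records; names and fields are illustrative only.
workspaces = [
    {"name": "Finance-Reports", "owner": "dana", "last_review": date(2024, 1, 10)},
    {"name": "Sales-Pipeline", "owner": "li", "last_review": date(2024, 5, 20)},
]

REVIEW_INTERVAL = timedelta(days=90)  # illustrative review cadence

def overdue_reviews(workspaces, today):
    """Return (workspace, owner) pairs whose review is past the interval,
    so the system, not a person's memory, decides who gets escalated."""
    return [
        (w["name"], w["owner"])
        for w in workspaces
        if today - w["last_review"] > REVIEW_INTERVAL
    ]

for name, owner in overdue_reviews(workspaces, today):
    # In a real flow this step would notify the named owner and log the escalation.
    print(f"escalate {name} to {owner}")  # prints: escalate Finance-Reports to dana
```

The point is that the escalation target comes from the ownership data, which is exactly why ownership has to be executable before automation like this can work.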

1205
01:04:49,620 --> 01:04:53,900
The second move is making ownership executable rather than just theoretical.

1206
01:04:53,900 --> 01:04:58,540
At level 300, ownership is often defined in the PDF but isn't wired into the daily operational

1207
01:04:58,540 --> 01:04:59,980
flow of the business.

1208
01:04:59,980 --> 01:05:04,940
People are named as owners but the consequences for inaction are weak and reviews aren't tightly

1209
01:05:04,940 --> 01:05:06,380
linked to system behavior.

1210
01:05:06,380 --> 01:05:11,060
You need to connect ownership to what the tenant actually does so if a workspace becomes inactive

1211
01:05:11,060 --> 01:05:15,380
or reviews are overdue, the system knows exactly who to escalate to for remediation.

1212
01:05:15,380 --> 01:05:19,580
Once that logic clicks, governance stops floating above the tenant and starts shaping the

1213
01:05:19,580 --> 01:05:20,580
actual work.

1214
01:05:20,580 --> 01:05:24,340
The third move involves integrating your entire control stack so that reviews, labeling

1215
01:05:24,340 --> 01:05:27,660
and life cycle controls operate as one cohesive model.

1216
01:05:27,660 --> 01:05:31,500
If your labeling says one thing while your DLP enforces another, you don't have control

1217
01:05:31,500 --> 01:05:32,500
coherence.

1218
01:05:32,500 --> 01:05:33,820
You just have a collection of fragments.

1219
01:05:33,820 --> 01:05:38,540
Level 400 is where those fragments align so that sensitive content is visible and protection

1220
01:05:38,540 --> 01:05:40,740
becomes measurable across the board.

1221
01:05:40,740 --> 01:05:44,420
None of this matters if your metrics are just used for reporting theatre so the fourth

1222
01:05:44,420 --> 01:05:48,220
move is defining outcome KPIs that challenge your operating model.

1223
01:05:48,220 --> 01:05:52,700
Instead of looking at vanity charts, you should focus on audit preparation time, label coverage

1224
01:05:52,700 --> 01:05:54,740
and the age of your existing exceptions.

1225
01:05:54,740 --> 01:05:58,300
If it still takes weeks to prepare for an audit, it means something structural is weak and

1226
01:05:58,300 --> 01:06:00,140
your governance is still leaking effort.

1227
01:06:00,140 --> 01:06:03,820
At this level, governance is no longer hidden inside the IT department because it has

1228
01:06:03,820 --> 01:06:06,140
finally entered the business rhythm.

1229
01:06:06,140 --> 01:06:09,380
Leadership sees the signals and understands the trade-offs, treating governance as a core

1230
01:06:09,380 --> 01:06:12,500
operational capability rather than a compliance side project.

1231
01:06:12,500 --> 01:06:16,900
However, you must be careful not to automate broken logic because automating a messy process

1232
01:06:16,900 --> 01:06:19,820
or a vague policy only serves to accelerate confusion.

1233
01:06:19,820 --> 01:06:24,500
You have to simplify the process and clarify ownership before you ever trigger the automation.

1234
01:06:24,500 --> 01:06:29,540
Ultimately, the move from 300 to 400 takes governance out of the world of documentation

1235
01:06:29,540 --> 01:06:32,460
and places it into the world of dependable behaviour.

1236
01:06:32,460 --> 01:06:36,260
You will find yourself doing less follow-up and fewer heroics because the system relies

1237
01:06:36,260 --> 01:06:39,500
on defaults and signals rather than manual reconstruction.

1238
01:06:39,500 --> 01:06:43,180
When this happens, governance stops feeling like a tax on the business and starts behaving

1239
01:06:43,180 --> 01:06:45,140
like essential infrastructure.

1240
01:06:45,140 --> 01:06:47,460
The upgrade path from 400 to 500.

1241
01:06:47,460 --> 01:06:51,460
If level 400 is where your governance becomes dependable, level 500 is where it becomes

1242
01:06:51,460 --> 01:06:52,460
truly adaptive.

1243
01:06:52,460 --> 01:06:56,980
This is a vital distinction to make because many teams think the goal is to reach predictability

1244
01:06:56,980 --> 01:06:58,500
and stay there forever.

1245
01:06:58,500 --> 01:07:02,940
In the world of Microsoft 365, a static posture won't hold for long because the platform,

1246
01:07:02,940 --> 01:07:06,220
the business patterns and the AI capabilities are always in motion.

1247
01:07:06,220 --> 01:07:10,580
The moment your governance model stops evolving, your maturity starts to decay.

1248
01:07:10,580 --> 01:07:13,140
Level 500 isn't just about having better controls.

1249
01:07:13,140 --> 01:07:16,340
It is about establishing a discipline of continuous tuning.

1250
01:07:16,340 --> 01:07:21,020
At this stage, the organization accepts that governance is never a finished state, so controls

1251
01:07:21,020 --> 01:07:24,460
are constantly reviewed and benchmarked against the changing environment.

1252
01:07:24,460 --> 01:07:28,780
The first move in this transition is reaching feedback maturity, where your metrics actually

1253
01:07:28,780 --> 01:07:30,620
drive the redesign of your systems.

1254
01:07:30,620 --> 01:07:35,060
At level 400 you have the data, but at level 500 that data forces you to investigate and

1255
01:07:35,060 --> 01:07:36,940
correct the root causes of friction.

1256
01:07:36,940 --> 01:07:40,980
If label coverage stalls or exceptions cluster around a specific workflow, that workflow

1257
01:07:40,980 --> 01:07:43,540
becomes the immediate object of improvement.

1258
01:07:43,540 --> 01:07:47,620
Metrics stop being used as mere proof of management and start becoming the primary inputs for

1259
01:07:47,620 --> 01:07:49,140
system optimization.

1260
01:07:49,140 --> 01:07:53,380
The second move is aligning your controls to the actual risk appetite of the business.

1261
01:07:53,380 --> 01:07:56,780
Mature organizations separate themselves here because their controls aren't built from

1262
01:07:56,780 --> 01:07:59,820
fear or inherited standards that no longer apply.

1263
01:07:59,820 --> 01:08:03,660
Instead, leadership understands which risks they will reduce aggressively and where they

1264
01:08:03,660 --> 01:08:05,900
need flexibility to maintain speed.

1265
01:08:05,900 --> 01:08:09,700
This calibrated approach scales much better than blanket controls because it reflects

1266
01:08:09,700 --> 01:08:12,500
how the people inside the system actually operate.

1267
01:08:12,500 --> 01:08:16,940
Third, you need to implement independent validation to challenge your own internal confidence.

1268
01:08:16,940 --> 01:08:21,500
A level 400 team might trust its own dashboards, but a level 500 organization looks for external

1269
01:08:21,500 --> 01:08:26,020
benchmarking or formal reviews against standards like ISO 27001.

1270
01:08:26,020 --> 01:08:29,380
This is important because mature systems don't just generate evidence.

1271
01:08:29,380 --> 01:08:32,700
They constantly test whether that evidence still means what they think it means.

1272
01:08:32,700 --> 01:08:36,060
This practice protects the organization against maturity theatre, where everything looks

1273
01:08:36,060 --> 01:08:38,580
good on the surface but is failing underneath.

1274
01:08:38,580 --> 01:08:43,180
The fourth move involves using AI and automation to improve the governance model itself,

1275
01:08:43,180 --> 01:08:45,140
rather than just monitoring the users.

1276
01:08:45,140 --> 01:08:49,460
You can use these tools for drift detection, life cycle tuning and trend analysis to sharpen

1277
01:08:49,460 --> 01:08:50,900
your decision support.

1278
01:08:50,900 --> 01:08:55,700
By using AI upstream to detect patterns earlier and downstream to reduce remediation time,

1279
01:08:55,700 --> 01:08:58,180
you create a system that learns from its own environment.
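The drift-detection idea described here can be sketched in a few lines. The following is a minimal illustration, not any real Microsoft API: the setting names and values are hypothetical, standing in for whatever approved baseline you would export from your own tenant and keep under version control.

```python
import json

# Hypothetical baseline: the approved tenant configuration, exported once
# and stored under version control. Keys and values are illustrative only.
baseline = {
    "sharing_capability": "ExternalUserSharingOnly",
    "default_link_type": "Direct",
    "guest_expiration_days": 60,
}

def detect_drift(current: dict, baseline: dict) -> list:
    """Return (setting, expected, actual) for every value that drifted."""
    drift = []
    for key, expected in baseline.items():
        actual = current.get(key)
        if actual != expected:
            drift.append((key, expected, actual))
    return drift

# A current export with one drifted setting.
current = {
    "sharing_capability": "ExternalUserAndGuestSharing",  # drifted
    "default_link_type": "Direct",
    "guest_expiration_days": 60,
}

for setting, expected, actual in detect_drift(current, baseline):
    print(f"DRIFT: {setting}: expected {expected!r}, got {actual!r}")
```

Run on a schedule against a fresh export, a comparison like this is the "upstream" pattern detection the episode describes: drift surfaces as soon as it appears, instead of during the next audit.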

1280
01:08:58,180 --> 01:09:03,180
It is worth noting that very few organizations live at this level across their entire tenant.

1281
01:09:03,180 --> 01:09:07,340
You might see pockets of level 500 behavior in a highly regulated business unit or a mature

1282
01:09:07,340 --> 01:09:10,620
records environment, but whole tenant optimization is rare.

1283
01:09:10,620 --> 01:09:14,760
It requires a level of sustained discipline and executive clarity that most companies

1284
01:09:14,760 --> 01:09:17,100
struggle to maintain over long periods.

1285
01:09:17,100 --> 01:09:20,500
The useful question for leadership is not whether the system is perfect but rather

1286
01:09:20,500 --> 01:09:24,380
where the system is predictable and where it still needs deliberate optimization.

1287
01:09:24,380 --> 01:09:29,060
This is the executive lens that views maturity as a matter of resilience over time.

1288
01:09:29,060 --> 01:09:33,060
The system keeps learning and the controls keep adjusting so the business can move forward

1289
01:09:33,060 --> 01:09:34,860
without governance falling behind.

1290
01:09:34,860 --> 01:09:38,460
When you see maturity this way you stop asking if the work is done and start asking if the

1291
01:09:38,460 --> 01:09:41,220
system is still fit for the next version of the business.

1292
01:09:41,220 --> 01:09:43,740
Why maturity is really about business reality.

1293
01:09:43,740 --> 01:09:47,100
This is the core message behind the entire model and it's something I've been thinking

1294
01:09:47,100 --> 01:09:48,420
a lot about lately.

1295
01:09:48,420 --> 01:09:53,760
M365 GRC maturity isn't just a compliance story that happens to have a few business side

1296
01:09:53,760 --> 01:09:57,660
effects; it's actually a business reality story that uses compliance as its primary

1297
01:09:57,660 --> 01:09:58,660
way of speaking.

1298
01:09:58,660 --> 01:10:03,780
If you look closely at what changed across these 500 tenants, the pattern is remarkably consistent:

1299
01:10:03,780 --> 01:10:08,580
as maturity improved, audit preparation got faster, exposure became smaller and easier to explain,

1300
01:10:08,580 --> 01:10:12,020
and Copilot became more useful because people actually trusted it to scale.

1301
01:10:12,020 --> 01:10:15,940
These aren't just abstract wins for a governance committee; they are fundamental wins for

1302
01:10:15,940 --> 01:10:17,660
the entire operating model.

1303
01:10:17,660 --> 01:10:22,380
That matters because many leadership teams still put governance into the category of necessary

1304
01:10:22,380 --> 01:10:23,380
overhead.

1305
01:10:23,380 --> 01:10:27,940
They see it as important and required, but they also see it as something separate from performance,

1306
01:10:27,940 --> 01:10:29,780
speed and how the business actually moves.

1307
01:10:29,780 --> 01:10:34,060
I don't think that old model survives contact with reality anymore, especially not in a world

1308
01:10:34,060 --> 01:10:37,540
where data is spread across Teams, SharePoint, and OneDrive.

1309
01:10:37,540 --> 01:10:41,980
When AI moves through those layers it reveals every structural weakness much faster than

1310
01:10:41,980 --> 01:10:43,540
we've ever seen before.

1311
01:10:43,540 --> 01:10:47,660
Once governance starts shaping your audit speed, your exposure risk and how useful your AI

1312
01:10:47,660 --> 01:10:50,580
tools are, it is no longer a peripheral concern.

1313
01:10:50,580 --> 01:10:54,900
It is infrastructure, and infrastructure changes business outcomes, whether leadership chooses

1314
01:10:54,900 --> 01:10:56,700
to acknowledge that or not.

1315
01:10:56,700 --> 01:11:00,660
That is why I keep coming back to the idea of predictable behavior rather than just

1316
01:11:00,660 --> 01:11:03,220
installed controls or declared maturity levels.

1317
01:11:03,220 --> 01:11:05,020
We have to ask the hard questions.

1318
01:11:05,020 --> 01:11:09,660
Can the tenant produce evidence without a week of drama, and can it reduce exposure by design

1319
01:11:09,660 --> 01:11:12,100
instead of relying on constant reminders?

1320
01:11:12,100 --> 01:11:16,500
If the system can support AI in a way that leadership can actually defend and scale,

1321
01:11:16,500 --> 01:11:17,500
then your maturity is real.

1322
01:11:17,500 --> 01:11:21,340
If the answer is no, then your system is still compensating and the business always pays

1323
01:11:21,340 --> 01:11:23,420
for that compensation in very familiar ways.

1324
01:11:23,420 --> 01:11:28,020
You see it in slower decisions, higher friction, longer audits and a constant stream of exceptions

1325
01:11:28,020 --> 01:11:31,180
that eventually lead to lower trust and uneven adoption.

1326
01:11:31,180 --> 01:11:35,060
That is the practical cost of low maturity, which isn't just a theoretical risk but a very

1327
01:11:35,060 --> 01:11:36,700
real form of operational drag.

1328
01:11:36,700 --> 01:11:40,060
I don't frame these poor outcomes as bad luck, because if your audit takes six weeks

1329
01:11:40,060 --> 01:11:43,900
to complete, that isn't a random event. When no one can explain why sensitive content

1330
01:11:43,900 --> 01:11:48,340
is broadly exposed, or when Copilot adoption stalls because nobody trusts the information

1331
01:11:48,340 --> 01:11:50,140
environment, those aren't accidents.

1332
01:11:50,140 --> 01:11:54,940
It is architecture, process design and ownership design all working together to produce a specific

1333
01:11:54,940 --> 01:11:55,940
result.

1334
01:11:55,940 --> 01:11:59,580
The system is doing exactly what it was set up to do, which means better outcomes won't

1335
01:11:59,580 --> 01:12:01,420
come from a sense of urgency alone.

1336
01:12:01,420 --> 01:12:05,140
They come from a structural redesign and that should actually be seen as good news for

1337
01:12:05,140 --> 01:12:07,540
leaders who are tired of fighting the same fires.

1338
01:12:07,540 --> 01:12:11,460
Once you stop treating these as disconnected pain points, you can start governing them as

1339
01:12:11,460 --> 01:12:13,780
a single cohesive operating model.

1340
01:12:13,780 --> 01:12:18,500
Audit readiness is not separate from who owns the data and exposure reduction is not separate

1341
01:12:18,500 --> 01:12:20,380
from how your workspaces are designed.

1342
01:12:20,380 --> 01:12:24,900
AI usefulness depends entirely on your labeling, your lifecycle management and your evidence

1343
01:12:24,900 --> 01:12:28,260
discipline because these are all connected expressions of maturity.

1344
01:12:28,260 --> 01:12:33,220
That is the perspective shift we need, and from a 500-tenant view, the proof points stay remarkably

1345
01:12:33,220 --> 01:12:34,220
stable.

1346
01:12:34,220 --> 01:12:38,540
First, your audit time will reveal your true maturity faster than any slide deck ever could.

1347
01:12:38,540 --> 01:12:42,460
If it still takes weeks to reconstruct evidence, your operating model is telling you something

1348
01:12:42,460 --> 01:12:44,780
important about its own fragility.

1349
01:12:44,780 --> 01:12:49,240
Second, exposure is almost always a design problem, so if oversharing keeps happening, you should

1350
01:12:49,240 --> 01:12:53,100
look at your defaults and inheritance before launching another awareness campaign.
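Treating exposure as a design problem can start with something as simple as comparing each workspace's sharing level against the tenant default. The sketch below assumes a hypothetical inventory export; the level names are loosely modeled on SharePoint sharing-capability values, but nothing here is pulled from a real API.

```python
# Permissiveness ranking for sharing levels, least to most permissive.
# Names are illustrative, modeled on SharePoint-style sharing settings.
LEVELS = [
    "Disabled",
    "ExistingExternalUserSharingOnly",
    "ExternalUserSharingOnly",
    "ExternalUserAndGuestSharing",
]

def more_permissive(site_level: str, tenant_default: str) -> bool:
    """True when a site's sharing level exceeds the tenant default."""
    return LEVELS.index(site_level) > LEVELS.index(tenant_default)

# Hypothetical inventory export: workspace name -> configured sharing level.
sites = {
    "HR": "Disabled",
    "Projects": "ExternalUserAndGuestSharing",
    "Finance": "ExternalUserSharingOnly",
}
tenant_default = "ExternalUserSharingOnly"

flagged = [name for name, level in sites.items()
           if more_permissive(level, tenant_default)]
print(flagged)  # → ['Projects']
```

A report like this points remediation at the defaults and the specific sites that escaped them, rather than at user behavior in general.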

1351
01:12:53,100 --> 01:12:57,920
Third, Copilot readiness depends on GRC maturity far more than most organizations want to

1352
01:12:57,920 --> 01:12:58,920
admit.

1353
01:12:58,920 --> 01:13:02,980
If your data estate is inconsistent, AI will surface that mess very quickly and make it

1354
01:13:02,980 --> 01:13:04,060
visible to everyone.

1355
01:13:04,060 --> 01:13:08,660
When we talk about maturity, we are really talking about the ability of a business to produce

1356
01:13:08,660 --> 01:13:11,020
trustworthy outcomes under normal pressure.

1357
01:13:11,020 --> 01:13:12,820
That is the standard we should aim for.

1358
01:13:12,820 --> 01:13:16,540
It's also a reliable way to keep producing the right outcomes in the future.

1359
01:13:16,540 --> 01:13:20,380
This matters for executives, architects and security teams alike because the question isn't

1360
01:13:20,380 --> 01:13:22,260
whether you have a governance policy.

1361
01:13:22,260 --> 01:13:25,660
The real question is what kind of business reality that governance is actually producing

1362
01:13:25,660 --> 01:13:27,260
for your people every day.

1363
01:13:27,260 --> 01:13:31,900
If I were sitting in your shoes tomorrow morning, I would keep the next steps very simple.

1364
01:13:31,900 --> 01:13:35,740
Start by using the five-question maturity check with your leadership team this week and then

1365
01:13:35,740 --> 01:13:38,700
choose one specific metric to track in each proof area.

1366
01:13:38,700 --> 01:13:42,580
Focus on your audit time, your exposure levels and your AI readiness to get a clear picture

1367
01:13:42,580 --> 01:13:43,580
of where you stand.
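Tracking one metric per proof area doesn't need tooling to get started. A structure as simple as the following sketch is enough to see whether a metric is trending the right way; the names, units, and readings here are all hypothetical placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class ProofArea:
    """One tracked metric for a proof area, e.g. audit time in days."""
    name: str
    unit: str
    readings: list = field(default_factory=list)

    def record(self, value: float) -> None:
        self.readings.append(value)

    def trend(self) -> float:
        """Change from first to latest reading; negative means improving
        for cost-style metrics like audit days or exposed items."""
        if len(self.readings) < 2:
            return 0.0
        return self.readings[-1] - self.readings[0]

# Hypothetical quarterly readings for audit preparation time.
audit_time = ProofArea("audit preparation", "days")
for reading in (30, 21, 12):
    audit_time.record(reading)
print(audit_time.trend())  # → -18
```

The same shape works for exposure levels and AI readiness; the point is a number per proof area that leadership can watch quarter over quarter.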

1368
01:13:43,580 --> 01:13:46,580
Once you have those numbers, ask one harder question.

1369
01:13:46,580 --> 01:13:50,340
Where does good behavior still depend on someone's memory instead of the system's design?

1370
01:13:50,340 --> 01:13:54,060
That is usually where you'll find the hidden upgrade point that offers the most value.

1371
01:13:54,060 --> 01:13:57,780
You should start there because once your controls become consistent and measurable, the entire

1372
01:13:57,780 --> 01:13:59,740
environment changes for the better.

1373
01:13:59,740 --> 01:14:03,380
When you reach that point, trust rises and friction drops significantly.

1374
01:14:03,380 --> 01:14:07,820
Microsoft 365 finally stops behaving like a risky container you have to worry about and

1375
01:14:07,820 --> 01:14:10,620
starts behaving like a true operating system for the business.

1376
01:14:10,620 --> 01:14:14,980
If you audited your structural resilience the same way you audit your finances, what

1377
01:14:14,980 --> 01:14:18,980
would you find? And is that system designed to sustain your growth or slowly drain your

1378
01:14:18,980 --> 01:14:21,180
resources over time?

1379
01:14:21,180 --> 01:14:25,740
Maturity in Microsoft 365 isn't defined by the tools you install, but by how predictably

1380
01:14:25,740 --> 01:14:28,660
your environment behaves when the pressure is actually on.

1381
01:14:28,660 --> 01:14:32,060
It's a system outcome, and if you want more executive breakdowns on Copilot, Azure,

1382
01:14:32,060 --> 01:14:36,700
and structural resilience, subscribe to the M365FM podcast and leave a review.

1383
01:14:36,700 --> 01:14:40,780
Connect with me on LinkedIn, then tell me which part of your environment you want me to audit next.


Founder of m365.fm, m365.show and m365con.net

Mirko Peters is a Microsoft 365 expert, content creator, and founder of m365.fm, a platform dedicated to sharing practical insights on modern workplace technologies. His work focuses on Microsoft 365 governance, security, collaboration, and real-world implementation strategies.

Through his podcast and written content, Mirko provides hands-on guidance for IT professionals, architects, and business leaders navigating the complexities of Microsoft 365. He is known for translating complex topics into clear, actionable advice, often highlighting common mistakes and overlooked risks in real-world environments.

With a strong emphasis on community contribution and knowledge sharing, Mirko is actively building a platform that connects experts, shares experiences, and helps organizations get the most out of their Microsoft 365 investments.