In this episode of the M365.fm podcast, Microsoft MVP Alan Cox joins us to discuss how organizations can securely adopt Microsoft 365 Copilot using Microsoft Purview, Data Loss Prevention (DLP), and Insider Risk Management.
As AI becomes increasingly integrated into daily work, protecting sensitive business data while enabling productivity is becoming a major priority for IT and security teams. Alan explains how Microsoft Purview helps organizations manage data governance, reduce oversharing risks, and apply security controls that work alongside Microsoft 365 Copilot.
The conversation explores how DLP policies can help prevent sensitive information from being exposed through AI-powered experiences, how Insider Risk Management can identify potentially risky user behavior, and why adaptive protection is changing the way businesses approach security and compliance in Microsoft 365.
Alan also shares practical guidance around Copilot readiness, governance strategies, compliance considerations, and common mistakes organizations make when deploying AI tools without the right data protection controls in place.
This episode is ideal for Microsoft 365 administrators, security professionals, compliance teams, and IT leaders looking to better understand how to balance AI innovation with strong security and governance practices across the Microsoft ecosystem.
Listen now to discover how Microsoft Purview can help organizations confidently deploy Microsoft 365 Copilot while keeping sensitive information secure and compliant.
You face new security challenges as you bring AI into your organization. Nearly 70% of Fortune 500 companies use Microsoft 365 Copilot, and 75% of knowledge workers rely on AI in their daily work. With this rapid adoption, sensitive data in Microsoft 365 apps becomes more exposed than ever. In 2024, the US House of Representatives banned congressional staff from using Microsoft Copilot over concerns that data could leak to unapproved cloud services. Researchers have also shown that attackers can target Microsoft 365 Copilot through prompt injection.
| Risk Type | Description |
|---|---|
| Uncontrolled exposure of confidential files | Misconfigured permissions in OneDrive and SharePoint can lead to sensitive documents leaking. |
| Data leakage through prompts | Traditional tools miss AI prompt risks, making it easier for data to leave the organization. |
You need solutions like Microsoft Purview, DLP, and Insider Risk Management to protect your information with proactive, automated controls.
Key Takeaways
- Microsoft 365 Copilot increases the risk of data breaches due to its AI-driven data access. Protect sensitive information proactively.
- Use Microsoft Purview to classify and label sensitive data, ensuring proper handling and compliance with regulations.
- Automate the application of sensitivity labels to reduce human error and maintain consistent data protection across Microsoft 365.
- Implement Data Loss Prevention (DLP) policies to block sensitive information from being included in AI-generated content.
- Monitor user interactions with Copilot to detect risky behaviors and ensure compliance with data protection policies.
- Establish clear insider risk policies to manage potential threats from employees using AI tools like Copilot.
- Regularly audit and review your security policies to adapt to new risks and ensure compliance with legal requirements.
- Train users on responsible AI tool usage to minimize the risk of accidental data leaks and enhance overall security.
Why Microsoft Copilot Needs Protection
AI Data Exposure Risks
You face unique risks when you use Microsoft Copilot. Unlike traditional software, Copilot uses AI to access and combine information from many sources in Microsoft 365. This process increases the chance of a data breach because Copilot can pull sensitive data from places you might not expect. Security controls designed for human users do not always protect against AI-driven data retrieval. Copilot can find and share data without you taking direct action, which means confidential files can appear in AI-generated outputs.
Here are some reasons Copilot needs extra protection:
- Copilot can access data across multiple platforms, raising the risk of permission oversharing.
- AI behavior is dynamic and can expose sensitive information unintentionally.
- Security telemetry in Microsoft 365 does not always track AI-driven data synthesis, making it harder to spot risks.
Sensitive data types at risk include:
| Type of Sensitive Data | Risk Description |
|---|---|
| Confidential files | Misconfigured permissions can expose sensitive documents to unintended users. |
| Personally Identifiable Information (PII) | Lack of visibility into AI interactions increases the risk of exposure. |
| Proprietary code | Insider misuse can lead to unauthorized access and exposure. |
| Sensitive business information | Unintentional data leakage through AI-generated outputs can distribute sensitive data widely. |
A leakage incident can happen when Copilot generates content that includes confidential information, even if you did not intend to share it.
Compliance and Regulatory Demands
You must meet strict compliance requirements when you use AI tools like Microsoft Copilot. Regulatory frameworks such as GDPR, HIPAA, ISO 42001, and the NIST AI Risk Management Framework set rules for data protection and transparency. These frameworks require you to minimize data use, explain automated decisions, and keep records of AI activity.
Note: Compliance programs often overlook AI-generated outputs, which can create gaps in demonstrating proper data access and sharing.
AI systems face unique compliance challenges. You must ensure transparency and explainability in AI decisions. You also need to manage bias and ethical concerns, which are not typical for traditional IT systems. Regulatory bodies now demand specific controls for AI governance, including risk management processes and comprehensive recordkeeping.
Insider Threats in AI Environments
You must watch for insider threats when you use AI tools. Microsoft Copilot can suggest code snippets or content that may contain sensitive information. Developers and employees might unintentionally misuse these suggestions, leading to data leakage or compliance violations. Over-permissioning and shadow AI use can increase risks, as employees may access data they should not see.
Common insider threats include:
- Unintentional data leakage through AI-generated outputs.
- Compliance violations when sensitive data is processed without proper auditing.
- Security breaches caused by misuse of access privileges.
You need strong protection to guard against these risks and keep your data secure.
Microsoft Purview for Copilot Security

You need strong tools to protect your organization’s data as you use Microsoft Copilot. Microsoft Purview gives you a complete framework for data protection, security, and compliance. You can use Purview to classify, label, and monitor sensitive data across Microsoft 365, including Copilot workflows. This section explains how you can set up Purview, automate governance, and keep your data secure.
Data Classification and Sensitivity Labels
You must identify and tag sensitive data before you can protect it. Microsoft Purview uses advanced detection methods to find sensitive information in your environment. These methods include sensitive information types and trainable classifiers. They scan user prompts and responses during AI interactions. When you apply sensitivity labels, you add a visible layer of data protection. Labels show up in apps like Word and Outlook, so users know when they handle confidential content.
Automating Label Application
You do not need to rely on users to label every file or email. Purview can automate the process. It scans your data and applies the right sensitivity label based on the content. This automation reduces human error and ensures consistent data protection. For example, if a document contains personally identifiable information, Purview can tag it with a “Confidential” label. You can set up rules to trigger automatic labeling for files stored in SharePoint, OneDrive, or Teams.
- Purview uses trainable classifiers to detect patterns in your data.
- Sensitivity labels can apply encryption and rights management.
- Labels follow the data, even if you move it outside your Microsoft 365 tenant.
This approach helps you meet compliance requirements and keeps your data secure, even when users work with AI tools like Copilot.
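The classification step above can be sketched as a small rules engine. This is only an illustration of the pattern-matching idea behind sensitive information types; real Purview detection is far richer (checksums, keyword proximity, confidence levels, trainable classifiers), and the label names and regexes here are hypothetical.

```python
import re

# Hypothetical stand-ins for Purview "sensitive information types".
PATTERNS = {
    "Confidential": [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # US-SSN-like
        re.compile(r"\b\d{4}[- ]\d{4}[- ]\d{4}[- ]\d{4}\b"),  # card-number-like
    ],
}

def suggest_label(text: str, default: str = "General") -> str:
    """Return the first sensitivity label whose patterns match the text."""
    for label, patterns in PATTERNS.items():
        if any(p.search(text) for p in patterns):
            return label
    return default
```

For example, `suggest_label("Employee SSN: 123-45-6789")` returns `"Confidential"`, while an innocuous document falls back to the default label.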
Restricting AI Access to Sensitive Data
You can control what data Copilot can access by using sensitivity labels and policies. When you apply a label that requires encryption, only users with the right permissions can access the data through AI applications. Copilot will not return or use data that users cannot access. This restriction protects your most sensitive information from accidental exposure.
Tip: Use Azure Information Protection with Purview to add another layer of security. You can combine sensitivity labels with encryption and access controls for maximum data protection.
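The access rule described above — Copilot never returns content the asking user could not open themselves — can be sketched as a filter over the documents available for grounding. The `Document` shape and user IDs below are hypothetical simplifications; in the real service, sensitivity labels and their encryption settings enforce this.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    name: str
    label: str                # sensitivity label, e.g. "Confidential"
    allowed_users: frozenset  # users the label's encryption grants access

def grounding_set(docs: list, user: str) -> list:
    """Keep only documents the user could open directly; anything else
    must never appear in an AI-generated answer."""
    return [d for d in docs if user in d.allowed_users]
```

With this filter in place, a user who lacks rights to a labeled file simply never sees it referenced in Copilot output.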
Policy Creation and Enforcement
You need clear policies to enforce data protection and security. Microsoft Purview lets you create and manage these policies from a single portal. You can set up rules that block Copilot from displaying labeled content. You can also define access policies using Microsoft Entra and require multi-factor authentication for users who access sensitive data.
Here are the practical steps for integrating Purview with Copilot:
- Access the Microsoft Purview portal with your admin credentials.
- Review existing policies for Copilot locations, such as SharePoint and Teams.
- Create or update DLP policies to include email and other critical locations.
- Use Azure Information Protection to apply encryption and rights management.
- Notify stakeholders about new policies and provide training.
- Monitor and audit policy effectiveness regularly.
- Update documentation and communicate changes to all users.
- Plan for future enhancements and review compliance with legal teams.
You should start with a pilot group to test your policies. Audit your resources for over-permissioned access. Apply sensitivity labels early and restrict external sharing. Harden conditional access with sign-in risk policies and use privileged identity management to control admin rights.
“The Microsoft 365 admin center is becoming the place where controls come together. Policies, observability, and configuration are in a single experience, so admins don’t have to hunt across multiple portals. That consolidation makes it easier for us to understand how AI is behaving in our tenant and what controls we have available to guide it.”
With Purview, you can automate governance for AI tools like Copilot. You get a streamlined admin experience and better visibility into your security posture.
Monitoring Copilot Data Usage
You must monitor how users interact with Copilot to detect risks and ensure compliance. Microsoft Purview tracks user interactions, data security events, and regulatory compliance metrics. You can see when users access sensitive data, detect internal risks like IP theft, and investigate potential insider threats.
- Track user activity and data access in real time.
- Detect and respond to data leakage or security violations.
- Monitor compliance with business, legal, and regulatory requirements.
- Identify electronic information for legal cases and prevent unauthorized deletion.
- Retain necessary content and delete unnecessary data as required.
Purview supports pseudonymization of usernames for privacy. You can search content across Microsoft 365 services and ensure that your data protection policies are working. Regular audits help you adapt your controls as your organization grows.
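The pseudonymization idea can be sketched with a keyed hash: reviewers see a stable token per user instead of a name, and only holders of the key could link tokens back to identities. This illustrates the concept; it is not how Purview implements it internally.

```python
import hashlib
import hmac

def pseudonymize(username: str, key: bytes) -> str:
    """Map a username to a stable, non-reversible token.

    The same user always yields the same token, so analysts can still
    correlate activity, but the name cannot be recovered without the key.
    """
    digest = hmac.new(key, username.encode("utf-8"), hashlib.sha256)
    return "user-" + digest.hexdigest()[:12]
```

Repeated calls for the same user produce the same token, while different users get different tokens, which is exactly the property an investigator needs.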
By using Microsoft Purview, you build a strong foundation for data security and compliance. You protect sensitive data, automate governance, and gain full visibility into your AI environment.
DLP and Data Loss Prevention for Microsoft Copilot

You need strong data loss prevention strategies to protect your organization as you use Microsoft Copilot. DLP helps you block sensitive information from appearing in AI-generated content and keeps your data secure. You can use DLP capabilities to monitor, restrict, and respond to risky actions in real time. Microsoft Purview gives you the tools to set up DLP policies that fit your business needs.
Creating DLP Policies for Copilot
You must create DLP policies that work specifically with Microsoft Copilot. These policies help you control how sensitive information moves through AI prompts and outputs. You can follow these steps to set up DLP for Copilot:
- Access the Microsoft Purview Data Security Posture Management portal.
- Choose your objective to prevent data exposure in Microsoft 365 Copilot.
- Start with the guided workflow and apply the one-click DLP policy in simulation mode.
- Customize the policy in the DLP portal or Microsoft 365 Admin Center.
- Create a DLP custom policy and specify Microsoft 365 Copilot as the location.
- Define rules for sensitive information types and actions to restrict Copilot processing.
You can tailor these policies to your organization’s needs. Simulation mode lets you test the policy before enforcing it. You can adjust rules to block, audit, or redact sensitive data.
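The simulate-then-enforce workflow can be sketched as a tiny rule evaluator. The SSN regex and mode names below are illustrative assumptions; real Purview DLP rules are configured in the portal, not in code.

```python
import re

# Illustrative sensitive-data pattern (US-SSN-like); real DLP policies
# use Purview's built-in sensitive information types.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def evaluate_prompt(prompt: str, mode: str = "simulate") -> dict:
    """Check one Copilot prompt against a single DLP rule.

    mode="simulate": record the match but do not block (test the policy).
    mode="enforce":  block the prompt when sensitive data is detected.
    """
    matched = SSN.search(prompt) is not None
    return {"matched": matched, "blocked": matched and mode == "enforce"}
```

Running a policy in simulation first shows how often it would fire without disrupting users; once the match rate looks right, you switch the same rule to enforce.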
Blocking Sensitive Data in AI Prompts
DLP policies block sensitive information from being used in Copilot prompts. When a user tries to include confidential data, the policy detects the attempt and stops Copilot from processing the request. For example, if someone enters a social security number in a prompt, Copilot will decline to handle the query. This approach protects your data and educates users about proper handling.
- DLP policies operate with both app and chat functions in Microsoft 365 Copilot.
- The policies ensure sensitive data is not processed or exposed.
- Users learn to avoid risky actions as DLP policies guide them.
You can combine sensitivity labels and DLP automation to simplify compliance. Future enhancements may include stricter enforcement and monitoring repeated attempts to use blocked data types.
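Where outright blocking is too disruptive, redaction is a middle ground: strip the sensitive value before the prompt is processed. A minimal sketch, again using a hypothetical SSN pattern:

```python
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(prompt: str) -> str:
    """Replace SSN-like values so the AI never processes the raw data."""
    return SSN.sub("[REDACTED-SSN]", prompt)
```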
Customizing DLP for Copilot Scenarios
You can customize DLP solutions for Copilot-specific scenarios. Inline DLP policy enforcement scans every Copilot prompt and AI-generated output for sensitive data patterns before content reaches the end user. You can tailor detection mechanisms to recognize traditional and AI-specific risks, such as long-form content, code snippets, or hidden references to client data.
- Automate escalation and incident handling by routing DLP violations to security teams for triage and remediation.
- Secure or quarantine outputs pending review to minimize exposure.
- Integrate DLP with logging and monitoring for compliance and forensic investigation.
- Test policies regularly and run negative scenario drills to validate effectiveness.
Collaborate with security, data owners, and legal teams to tune rules and reduce false negatives. Review DLP incident data and user feedback to adjust classification logic and keep controls aligned with evolving AI workflows.
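The escalation routing in the list above can be sketched as a triage function: high-risk information types or high match counts are quarantined for security-team review, while everything else lands in the audit log. The info-type names and thresholds are hypothetical.

```python
HIGH_RISK_TYPES = {"ssn", "credit_card", "source_code"}  # assumed categories

def route_violation(info_type: str, match_count: int) -> str:
    """Triage a DLP violation: quarantine high-severity hits for review,
    send the rest to the audit log."""
    if info_type in HIGH_RISK_TYPES or match_count >= 5:
        return "quarantine"
    return "audit_log"
```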
DLP Alerts and Incident Response
You must respond quickly when DLP alerts trigger. Investigation starts with evidence collection to determine the cause and impact. Use the Microsoft Defender portal to manage DLP alerts and filter incidents to focus on the most critical cases. Take immediate action to isolate affected systems and limit access to exposed data.
- Keep accurate logs and evaluate the scope of each incident.
- Notify affected parties promptly to ensure transparency.
- Practice incident response exercises and update your plan regularly.
You can use tools like the DLP alert management dashboard, activity explorer, and content explorer to collect evidence and track file activities. These tools help you understand what happened and guide your response.
Tip: Regular drills and reviews help you stay prepared for real incidents. Update your response plan as your organization grows.
Best Practices for DLP Setup
You can follow best practices to set up DLP for Microsoft Copilot in enterprise environments. These practices help you maximize protection and minimize risk.
| Best Practice | Description |
|---|---|
| Enable DLP | Configure DLP policies in Microsoft Purview to detect and block sensitive information from being included in Copilot-generated content. |
| Configure DLP Policies | Set rules that audit, block, or redact sensitive information inside prompts and responses, recognizing Copilot as a standalone policy location. |
You can use Endpoint DLP to extend protection to devices, monitor risky actions, control data movement, and prevent leaks from endpoints.
You should enable DLP, configure policies, and involve stakeholders in policy development. Test your policies regularly and use feedback to improve detection and response. Integrate DLP with logging and monitoring for compliance and audits.
Note: DLP policies and automation simplify compliance and keep your organization secure as you adopt AI tools like Microsoft Copilot.
Managing Insider Risks with Purview
You need to manage insider risks as you use Microsoft Copilot in your organization. Microsoft Purview gives you tools to detect, investigate, and respond to risky behaviors that could lead to data exfiltration or compliance issues. You can use these tools to protect your sensitive data and keep your business safe.
Detecting Risky Copilot Behaviors
You must watch for signs of risky activity when employees use Copilot. Purview helps you spot these behaviors by monitoring for prompt injection attacks, tracking access to protected materials, and flagging inappropriate communications. You also get insights from Microsoft Defender XDR, which gives you a full view of AI-related risks.
Unusual Access Patterns
You can set up alerts for abnormal usage patterns. For example, if someone tries to access sensitive topics through Copilot more often than usual, Purview will notify you. You can establish normal usage patterns for each user, role, and business unit. When someone acts outside these patterns, Purview uses machine learning or rules-based scoring to flag the activity. This helps you catch insider threats before they lead to data exfiltration.
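The baseline-and-flag idea can be sketched with a simple z-score: compare today's count of sensitive-topic queries against the user's own history and flag large deviations. Real insider-risk scoring in Purview is far more sophisticated; the threshold here is an arbitrary illustration.

```python
from statistics import mean, stdev

def is_anomalous(history: list, today: int, z_threshold: float = 3.0) -> bool:
    """Flag today's count if it sits more than z_threshold standard
    deviations above the user's historical baseline."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:          # perfectly flat history: any increase is unusual
        return today > mu
    return (today - mu) / sigma > z_threshold
```

A user who normally makes two or three sensitive-topic queries a day but suddenly makes twenty would clear almost any reasonable threshold.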
Risky Prompts and Data Exfiltration
You need to look for prompts that could cause data exfiltration. Purview scans Copilot prompts and outputs for signs of risky behavior. If an employee tries to use Copilot to extract confidential information, Purview will detect and block the attempt. You can also track repeated attempts to access or share sensitive data. This approach helps you stop insider risks before they become bigger problems.
Insider Risk Policy Configuration
You can configure insider risk policies in Purview by following a few key steps:
- Establish secure defaults. Enforce Restricted Access Control for critical sites and turn off company-wide sharing groups.
- Set up secure guardrails. Use auto-labeling and DLP policies to keep sensitive data safe from Copilot misuse.
- Continuously enforce and improve your guardrails. Use Purview reporting and risk assessments to check your protection and investigate AI usage.
Tip: Review your policies often. Update them as your organization changes or as new AI features become available.
Real-World Insider Risk Scenarios
You can see how insider risks appear in different departments. The table below shows examples of how Copilot can create risk if you do not manage access and classification:
| Department | Scenario Description |
|---|---|
| Finance | A financial analyst uses Copilot to generate a report that may inadvertently include unreleased earnings data if not properly classified. |
| HR | An HR manager compiles a report that could expose sensitive employee information due to overly permissive access controls. |
| R&D | A product development team risks exposing confidential information about upcoming products when using Copilot for brainstorming. |
| Marketing | A marketing team analyzes focus group feedback, potentially sharing sensitive participant information without proper classification. |
You can reduce these risks by using Microsoft Purview to monitor, classify, and protect your data. This approach helps you prevent data exfiltration and keeps your organization secure.
Integrating Purview, DLP, and Insider Risk Management
Unified Security Controls for Copilot
You can strengthen your security by integrating Purview, DLP, and Insider Risk Management. These tools work together to give you a single control point for Copilot. Security Copilot in Purview analyzes information from DLP and Insider Risk Management. It uses a promptbook to run multiple prompts in sequence, which helps you get integrated results. This approach improves your security framework and makes it easier to manage risks.
- You can see how data moves across your environment.
- You can detect risky behavior from both users and AI agents.
- You can enforce policies that protect your sensitive information.
Purview correlates data classification, user actions, and policy coverage. This process helps you spot real-time risks, such as oversharing through AI. You receive actionable recommendations to close any gaps in your security posture.
Dynamic Policy Enforcement
You need dynamic policy enforcement to keep up with changing threats. Purview’s Data Security Posture Management for AI gives you a centralized dashboard. You can monitor AI activity and assess risks in one place. You can also enforce compliance policies across Copilot and other AI applications.
- Set up your policies in Purview.
- Monitor how users interact with Copilot.
- Adjust your rules based on alerts and new risks.
- Apply changes quickly to keep your security strong.
This process helps you respond to threats as they happen. You do not have to wait for a manual review. You can automate enforcement and make sure your organization stays protected.
Tip: Review your policies often. Update them when you see new patterns or threats.
Building a Security Dashboard
A security dashboard gives you a clear view of Copilot activity. You can track prompts, responses, and user actions. The dashboard helps you spot problems early and take action fast.
| Feature | Description |
|---|---|
| Centralized Prompt and Response Logs | Capture all Copilot prompts and responses with timestamps and user attribution for traceability. |
| Monitoring Tool Integration | Integrate Copilot logs with SIEM tools for automated detection of risky behavior. |
| Automated Incident Detection and Alerting | Set up alerts for abnormal usage patterns to support proactive defense. |
| Log Retention and Compliance Controls | Enforce policy-based retention with secure storage for compliance. |
| Monitor Microsoft Graph and API Access Patterns | Track activity through Microsoft Graph API logs to detect unusual access behavior. |
| Audit Logging and Data Access Visibility | Log all Copilot activity within Microsoft 365’s unified audit pipeline for tracking. |
You can use these features to improve your security. You can see where risks start and stop them before they grow. A strong dashboard helps you keep your organization safe as you use AI tools.
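The first dashboard row — centralized prompt and response logs with timestamps and user attribution — can be sketched as an append-only store. The field names here are hypothetical; the real data lives in Microsoft 365's unified audit log.

```python
from datetime import datetime, timezone

class CopilotAuditLog:
    """Append-only record of Copilot interactions with user attribution."""

    def __init__(self):
        self._entries = []

    def record(self, user: str, prompt: str, response: str) -> dict:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "prompt": prompt,
            "response": response,
        }
        self._entries.append(entry)
        return entry

    def by_user(self, user: str) -> list:
        """Trace every interaction attributed to one user."""
        return [e for e in self._entries if e["user"] == user]
```

An append-only shape matters here: traceability depends on entries never being edited or silently dropped after the fact.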
Monitoring and Insights for Copilot Security
Real-Time Activity Monitoring
You need to see what happens in your environment as it happens. Real-time activity monitoring gives you the power to spot risks before they grow. You can use several tools and methods to track Copilot activity:
- Centralized prompt and response logs help you trace every Copilot interaction.
- Monitoring tool integration connects Copilot logs to SIEM tools like Microsoft Sentinel for automated detection.
- Automated incident detection and alerting set up notifications for abnormal usage patterns.
- Log retention and compliance controls keep records safe and easy to review.
- Behavioral baseline analytics establish normal usage patterns so you can find anomalies.
- Unified visibility across Microsoft 365, Azure, and Copilot activity shows who accesses sensitive content.
- Continuous monitoring for sensitive data access tracks how Copilot interacts with your information.
- Automated detection of misconfigured permissions finds issues that may expose sensitive data.
- Policy enforcement for AI-driven workflows defines rules for what Copilot can access.
- Context-aware alerts for anomalous Copilot behavior provide real-time warnings.
Tip: Set up alerts for unusual activity. You can act quickly when you see something out of the ordinary.
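The misconfigured-permission check in the list above can be sketched as a scan for broad sharing principals. The group names are common SharePoint defaults; treat this as an illustration, not a replacement for Purview's own assessments.

```python
# Broad principals that commonly indicate oversharing in SharePoint.
BROAD_GROUPS = {"Everyone", "Everyone except external users", "All Company"}

def overshared_sites(sites: dict) -> list:
    """Return names of sites shared with broad groups that Copilot's
    reach could turn into unintended exposure."""
    return sorted(name for name, principals in sites.items()
                  if BROAD_GROUPS & set(principals))
```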
Analytics and Reporting
You can use analytics and reporting to improve your Copilot security outcomes. These tools help you understand what is happening and make better decisions.
- Human oversight lets you validate AI-generated security outputs with your expertise.
- Verification tools check the accuracy of insights and help you trust your reports.
- Feedback mechanisms allow you to share input and improve the relevance of outputs.
- Optimize SCU consumption by monitoring usage and fine-tuning data sources for efficiency.
You can review dashboards and reports to see trends and patterns. You can use these insights to adjust your policies and keep your environment safe.
Note: Analytics help you spot risks early and respond before they become bigger problems.
Compliance Audits and Reviews
You must conduct regular audits and reviews to meet compliance requirements. These checks help you prove that your controls work and that you protect sensitive information. The table below shows key requirements for auditing Copilot usage:
| Requirement Type | Details |
|---|---|
| Regular Audits and Reviews | Quarterly assessment of data processing activities; annual DPIA updates and risk reassessments; continuous monitoring of privacy control effectiveness; regular training updates for IT and compliance teams |
| Documentation and Governance | Maintain comprehensive records of processing activities; document all privacy control implementations and changes; establish clear escalation procedures for privacy incidents; regular communication with data protection officers and legal teams |
| Technology Monitoring | Stay informed about Microsoft 365 feature updates and privacy implications; monitor Worklytics platform enhancements and new privacy features; assess third-party integrations for privacy impact; plan for technology refresh cycles and compliance implications |
You can use these requirements to build a strong audit program. You keep your organization ready for regulatory reviews and show that you follow best practices.
Callout: Regular audits help you stay compliant and build trust with your stakeholders.
You can secure Microsoft Copilot by using a layered, automated approach. Combining Microsoft Purview, DLP, and Insider Risk Management gives you strong protection for your data as you adopt AI tools. Stay proactive by following best practices:
- Start with a pilot group and audit permissions.
- Apply sensitivity labels and restrict external sharing.
- Enable DLP and monitor activity logs.
- Train users on responsible use.
Explore solutions like Opsin, Wiz, CrowdStrike Falcon, and Splunk Enterprise Security to strengthen your security posture.
FAQ
What is Microsoft Purview and how does it help secure Copilot?
You use Microsoft Purview to classify, label, and monitor sensitive data in your environment. This tool helps you automate governance and enforce security policies, making it easier to protect information when you use Copilot.
How do sensitivity labels work with Copilot?
You apply sensitivity labels to files and emails. These labels control who can access the content. When you use Copilot, the system respects these labels and prevents unauthorized users from seeing protected information.
Can I block Copilot from accessing certain data?
Yes. You set up policies that restrict Copilot’s access to specific data types or locations. These controls help you prevent accidental exposure of confidential information during AI interactions.
What should I do if a DLP alert triggers?
You review the alert details in your security dashboard. Investigate the incident, collect evidence, and take action to contain any risk. Regular drills help you respond quickly and keep your organization safe.
How does Insider Risk Management detect risky Copilot behavior?
You monitor user activity for unusual patterns, such as repeated attempts to access sensitive files. The system uses analytics to flag risky prompts or actions, helping you stop threats before they cause harm.
Do I need to update my compliance program for Copilot?
Yes. You should review your compliance policies to include AI-generated outputs. Regular audits and documentation help you meet regulatory requirements and show that you protect sensitive information.
How can I monitor Copilot activity in real time?
You use centralized logs and dashboards to track prompts, responses, and user actions. Real-time monitoring lets you spot risks early and respond before issues grow.
Is training required for users working with Copilot?
Yes. You provide training on responsible Copilot use and data protection. This helps users understand security policies and reduces the chance of accidental data leaks.
🚀 Want to be part of m365.fm?
Then stop just listening… and start showing up.
👉 Connect with me on LinkedIn and let’s make something happen:
- 🎙️ Be a podcast guest and share your story
- 🎧 Host your own episode (yes, seriously)
- 💡 Pitch topics the community actually wants to hear
- 🌍 Build your personal brand in the Microsoft 365 space
This isn’t just a podcast — it’s a platform for people who take action.
🔥 Most people wait. The best ones don’t.
👉 Connect with me on LinkedIn and send me a message:
"I want in"
Let’s build something awesome 👊
1
00:00:00,000 --> 00:00:06,360
Yeah, hello again to another episode of the M365.fm podcast.
2
00:00:06,360 --> 00:00:11,360
And today I have Alan Cox here with me, MVP.
3
00:00:11,360 --> 00:00:14,520
And we talk about protecting Microsoft Copilot
4
00:00:14,520 --> 00:00:18,400
with Purview, DLP, and insider risk.
5
00:00:18,400 --> 00:00:25,680
And yeah, Alan, I think for listeners who may not know you yet,
6
00:00:25,680 --> 00:00:28,400
can you introduce yourself and what you
7
00:00:28,400 --> 00:00:31,600
are trying to do in the Microsoft space?
8
00:00:31,600 --> 00:00:38,320
Yeah, so Alan Cox, I am based in Missouri in the United States.
9
00:00:38,320 --> 00:00:43,040
And I am an MVP of both M365 and Copilot,
10
00:00:43,040 --> 00:00:46,480
although most of my professional direction
11
00:00:46,480 --> 00:00:49,400
is really around the governance and the Purview space.
12
00:00:49,400 --> 00:00:53,040
And so really kind of combining those,
13
00:00:53,040 --> 00:00:55,920
we hear a lot of people talking about AI governance and all
14
00:00:55,920 --> 00:00:56,240
of that.
15
00:00:56,240 --> 00:00:59,840
And that's exactly what really my focus is,
16
00:00:59,840 --> 00:01:05,280
but not even just around AI, but Purview as a whole in governance
17
00:01:05,280 --> 00:01:05,960
and all of that.
18
00:01:05,960 --> 00:01:09,800
So I do work as a director of Microsoft governance.
19
00:01:09,800 --> 00:01:11,840
And of course, AI is a big part of that.
20
00:01:11,840 --> 00:01:18,080
And that's what we will be talking about also,
21
00:01:18,080 --> 00:01:20,920
delivered courses on Udemy around Purview,
22
00:01:20,920 --> 00:01:23,080
as well as M365 and adoption.
23
00:01:23,080 --> 00:01:28,320
And so I speak at as many of the user groups and conferences
24
00:01:28,320 --> 00:01:32,280
that I can. Other than that, I'm all over LinkedIn and YouTube,
25
00:01:32,280 --> 00:01:35,800
so yeah, happy to be here.
26
00:01:35,800 --> 00:01:37,640
Can you--
27
00:01:37,640 --> 00:01:41,520
I see a lot of people use governance, security, compliance,
28
00:01:41,520 --> 00:01:44,800
risk as one topic.
29
00:01:44,800 --> 00:01:48,560
How would you say governance is different
30
00:01:48,560 --> 00:01:51,720
from the other things?
31
00:01:51,720 --> 00:01:55,640
Yes, so for me, compliance is really kind of looking back.
32
00:01:55,640 --> 00:01:59,160
Compliance is kind of looking back at--
33
00:01:59,160 --> 00:02:04,120
and more, are we meeting the regulatory requirements
34
00:02:04,120 --> 00:02:08,200
that we need to, depending upon what our industry is?
35
00:02:08,200 --> 00:02:10,760
Whereas governance is kind of looking forward.
36
00:02:10,760 --> 00:02:13,600
Governance is kind of looking at, how
37
00:02:13,600 --> 00:02:17,480
can we prevent things from happening?
38
00:02:17,480 --> 00:02:20,040
I always tell clients, part of my job
39
00:02:20,040 --> 00:02:22,560
is to help keep you off the news.
40
00:02:22,560 --> 00:02:26,560
Because if you have a major break or breach,
41
00:02:26,560 --> 00:02:29,440
then you're going to end up being on the news likely,
42
00:02:29,440 --> 00:02:32,040
especially if you're a high-profile finance firm.
43
00:02:32,040 --> 00:02:34,160
So that's really kind of the difference for me.
44
00:02:34,160 --> 00:02:37,880
But also governance is more than just control.
45
00:02:37,880 --> 00:02:40,720
I just drafted an article on this, actually.
46
00:02:40,720 --> 00:02:43,680
It's not-- in fact, it just came out, I think, yesterday.
47
00:02:43,680 --> 00:02:47,600
It's not even about-- it's not about control.
48
00:02:47,600 --> 00:02:51,320
It's just having process in place to make things safer.
49
00:02:51,320 --> 00:02:53,280
It's not just, well, we've got a policy
50
00:02:53,280 --> 00:02:56,240
or we've got a control for this.
51
00:02:56,240 --> 00:02:58,160
It's looking at the big picture.
52
00:02:58,160 --> 00:03:00,960
But also, when you think about it, something
53
00:03:00,960 --> 00:03:06,200
that's big here in the US and Midwest is NASCAR racing.
54
00:03:06,200 --> 00:03:08,320
And one of the things they talk about a lot
55
00:03:08,320 --> 00:03:10,480
is a governor on those cars.
56
00:03:10,480 --> 00:03:13,600
And what that means is it slows them down.
57
00:03:13,600 --> 00:03:17,720
So governance isn't necessarily just rules and controls
58
00:03:17,720 --> 00:03:18,120
and all that.
59
00:03:18,120 --> 00:03:19,320
It's part of that.
60
00:03:19,320 --> 00:03:23,040
But it's also like, let's pump the brakes a little bit.
61
00:03:23,040 --> 00:03:26,360
And let's slow things down and look at this from the big picture.
62
00:03:26,360 --> 00:03:30,320
So for me, that's where really governance fits.
63
00:03:30,320 --> 00:03:36,480
And how was your way to, let's say, Purview?
64
00:03:36,480 --> 00:03:38,520
This is a special topic.
65
00:03:38,520 --> 00:03:43,000
So what brought you to this topic?
66
00:03:43,000 --> 00:03:45,800
I'm sorry, I didn't quite understand the question.
67
00:03:45,800 --> 00:03:50,680
Well, you are an expert in Purview,
68
00:03:50,680 --> 00:03:55,120
and I think all these other things are so cool.
69
00:03:55,120 --> 00:04:00,120
You can do Teams and Copilot, and all that stuff sounds so cool.
70
00:04:00,120 --> 00:04:03,800
What was your interest in Purview?
71
00:04:03,800 --> 00:04:05,440
How are you landing there?
72
00:04:05,440 --> 00:04:06,480
Yeah.
73
00:04:06,480 --> 00:04:10,400
Really, it was kind of sort of an organic thing.
74
00:04:10,400 --> 00:04:15,680
It was because businesses really started talking about AI and Copilot.
75
00:04:15,680 --> 00:04:18,560
And everybody was like, oh, you know,
76
00:04:18,560 --> 00:04:23,960
kind of anti-AI initially because of the fear of what these models
77
00:04:23,960 --> 00:04:25,720
may be doing in your environment.
78
00:04:25,720 --> 00:04:30,880
And while we definitely need to be taking a look at that,
79
00:04:30,880 --> 00:04:35,800
the natural gravitation was toward Copilot,
81
00:04:35,800 --> 00:04:39,400
and then securing Copilot, and making sure
82
00:04:39,400 --> 00:04:43,880
that it, or any other AI model that may be running in your environment,
83
00:04:43,880 --> 00:04:45,920
is secure. And Microsoft has done a pretty good job
84
00:04:45,920 --> 00:04:49,360
at putting good controls in place.
84
00:04:49,360 --> 00:04:54,440
So something that I always tell clients is, Copilot, or AI,
85
00:04:54,440 --> 00:04:57,000
I'll say Copilot is really kind of more my specialty,
86
00:04:57,000 --> 00:05:00,720
but Copilot doesn't necessarily introduce new risk.
87
00:05:00,720 --> 00:05:03,120
It can surface existing risk.
88
00:05:03,120 --> 00:05:06,560
So it can surface over-permissioning, oversharing.
89
00:05:06,560 --> 00:05:10,480
But all of that to me kind of has to go together.
90
00:05:10,480 --> 00:05:16,280
So I think anybody that's really involved in agentic Copilot,
91
00:05:16,280 --> 00:05:20,320
or delivering agents like that, really,
92
00:05:20,320 --> 00:05:24,640
I think almost have to be kind of in the governance space
93
00:05:24,640 --> 00:05:25,960
to some degree.
94
00:05:25,960 --> 00:05:27,120
Otherwise, it's just kind of, you know,
95
00:05:27,120 --> 00:05:30,200
Microsoft talks about responsible AI, and that's part of it.
96
00:05:30,200 --> 00:05:32,240
So for me, it was just kind of a natural gravitation,
97
00:05:32,240 --> 00:05:37,120
but also just that, you know, I've done Exchange work over the years.
98
00:05:37,120 --> 00:05:41,680
I've done a lot of other platforms inside of the 365 space.
99
00:05:41,680 --> 00:05:45,680
And Purview just kind of brings it all together for me.
100
00:05:45,680 --> 00:05:48,040
And so I just really, really enjoy it.
101
00:05:48,040 --> 00:05:50,520
And it's also an area that's misunderstood
102
00:05:50,520 --> 00:05:53,560
that people don't understand it.
103
00:05:53,560 --> 00:05:58,320
So it's just, it's nice for me to be able to help kind of shed some light on that.
104
00:06:00,000 --> 00:06:05,320
And I saw in Entra, and I found it really interesting,
105
00:06:05,320 --> 00:06:15,040
that AI models are handled more like a personal identity than an application.
106
00:06:15,040 --> 00:06:24,040
So is there a change you see in Purview from, yeah, from what AI brings to it?
107
00:06:26,200 --> 00:06:32,280
Yeah, yeah, and that's the other, that's the big part of it is AI.
108
00:06:32,280 --> 00:06:36,760
I mean, well, there's a big push,
109
00:06:36,760 --> 00:06:42,520
and not just governance around the applications and around sensitivity,
110
00:06:42,520 --> 00:06:44,200
but also when it comes to identity.
111
00:06:44,200 --> 00:06:46,200
So we're seeing,
112
00:06:46,200 --> 00:06:47,320
you know, that
113
00:06:47,320 --> 00:06:52,240
identity governance is really kind of blended in through all of that.
114
00:06:52,240 --> 00:06:55,640
And we're seeing all that in Purview, you know, from
115
00:06:55,640 --> 00:07:01,400
Entra. But also, the interesting thing is, when I talk a lot about,
116
00:07:01,400 --> 00:07:07,000
I talk a lot about Purview, the integration
117
00:07:07,000 --> 00:07:13,160
with, of course, all the Microsoft 365 apps, but also the apps in,
118
00:07:13,160 --> 00:07:17,480
or what's going on in Entra ID, but also,
119
00:07:17,480 --> 00:07:24,000
you know, but also the identity governance and Defender space as well,
120
00:07:24,000 --> 00:07:28,640
because they really are merging a lot of that together, you know,
121
00:07:28,640 --> 00:07:35,880
for example, a lot of the alerts that you configure for DLP or for
122
00:07:35,880 --> 00:07:40,880
Insider Risk and all of that show up in Defender.
123
00:07:40,880 --> 00:07:45,000
So, yeah, so we definitely see a lot of
124
00:07:45,000 --> 00:07:51,440
the integration across other security platforms within 365.
125
00:07:53,120 --> 00:07:58,960
And honestly, Insider Risk is probably one of my favorite features because it
126
00:07:58,960 --> 00:08:02,440
just keeps expanding so much and touching so many things.
127
00:08:02,440 --> 00:08:09,000
And it is also one of those things that is hard to understand because it involves
128
00:08:09,000 --> 00:08:10,680
adaptive protection.
129
00:08:10,680 --> 00:08:13,000
It gets into conditional access.
130
00:08:13,000 --> 00:08:18,280
So, but it's nice for me to kind of get ahold of it and then be
131
00:08:18,280 --> 00:08:25,160
able to explain and present that to security admins or to, you know,
132
00:08:25,160 --> 00:08:26,200
compliance officers.
133
00:08:26,200 --> 00:08:31,880
I've found that it's also interesting because there's, yeah,
134
00:08:31,880 --> 00:08:34,880
people don't understand their risk.
135
00:08:34,880 --> 00:08:39,480
So most people say, okay, insider risk, that's somebody
136
00:08:39,480 --> 00:08:47,080
who will do something evil, but maybe 90% of
137
00:08:47,080 --> 00:08:50,360
the people don't even know they're doing anything evil.
138
00:08:50,360 --> 00:08:51,480
Right.
139
00:08:51,480 --> 00:08:52,880
Exactly.
140
00:08:52,880 --> 00:08:55,760
And that is really where insider risk comes in.
141
00:08:55,760 --> 00:09:00,520
I mean, a lot of people, the truth is most of your risk comes from within.
142
00:09:00,520 --> 00:09:08,160
And so, a lot of it is, you know, just good intent.
143
00:09:08,160 --> 00:09:12,280
Like you said, it's not necessarily people that are malicious or trying to
144
00:09:12,280 --> 00:09:17,680
do harm, but they just don't know. And like I always tell people, you know, I'm
145
00:09:17,680 --> 00:09:23,320
trying to protect you from yourself, basically. Because, you know, people
146
00:09:23,320 --> 00:09:28,680
can make mistakes. But even if they're, you know, especially with Copilot,
147
00:09:28,680 --> 00:09:32,000
they're kind of poking around a little bit, asking some questions.
148
00:09:32,000 --> 00:09:36,720
Well, now we've got insider risk that can surface and say, well, okay,
149
00:09:36,720 --> 00:09:40,840
if somebody's asking about some things that may be of a sensitive nature,
150
00:09:42,040 --> 00:09:43,560
and so we've got that.
151
00:09:43,560 --> 00:09:48,280
And then of course it kind of spawns into, you know, communication compliance,
152
00:09:48,280 --> 00:09:54,080
DSPM, and all of these other solutions within Purview start firing up, ones that
153
00:09:54,080 --> 00:09:55,480
you may have configured.
154
00:09:55,480 --> 00:10:00,000
And so it's just really, even within Purview, a lot of that is tightly integrated,
155
00:10:00,000 --> 00:10:04,280
which is one of the things that insider risk does.
156
00:10:04,280 --> 00:10:07,840
It really sends signals to a lot of different areas within Purview and
157
00:10:07,840 --> 00:10:12,000
collects those from them, too. So, but yeah, I think the vast
158
00:10:12,000 --> 00:10:20,720
majority of insider risk is coming from, you know, a non-nefarious sort of intent.
159
00:10:20,720 --> 00:10:29,640
And was there a particular customer scenario or security incident
160
00:10:29,640 --> 00:10:36,200
that changed how you think about AI governance, or was this naturally,
161
00:10:37,560 --> 00:10:39,080
you know, as a topic?
162
00:10:39,080 --> 00:10:46,440
Yeah, I think it was mostly, mostly natural, but we do, you know,
163
00:10:46,440 --> 00:10:50,040
I do look, I mean, because you can go in and you can look at Activity
164
00:10:50,040 --> 00:10:54,200
Explorer, you know, and you can, with appropriate permissions,
165
00:10:54,200 --> 00:10:58,480
of course, see some of the prompts that are actually, you know,
166
00:10:58,480 --> 00:11:04,040
put in there. And of course you can set up the alerts to, you know,
167
00:11:04,040 --> 00:11:08,600
to alert you if someone's, you know, targeting specific sensitive information or
168
00:11:08,600 --> 00:11:15,440
asking questions like, you know, what's our corporate credit card
169
00:11:15,440 --> 00:11:18,920
number? And if somebody's accidentally shared this information somewhere,
170
00:11:18,920 --> 00:11:22,480
that will surface. And like I said, a lot of people may have permission to
171
00:11:22,480 --> 00:11:25,720
something. They just don't even know they have permission because somebody has
172
00:11:25,720 --> 00:11:30,240
overshared. It doesn't mean they necessarily got an email saying that they've been
173
00:11:30,240 --> 00:11:33,120
given access. They could just be given access. And then,
174
00:11:34,080 --> 00:11:39,960
and then Copilot can surface that. So it's not a particular
175
00:11:39,960 --> 00:11:44,680
situation, I don't think. I think it's more, although we do run across,
176
00:11:44,680 --> 00:11:50,960
you know, those situations, but most of the time, I'm brought in to kind of
177
00:11:50,960 --> 00:11:55,680
look at their permission and sharing models and how that is.
178
00:11:55,680 --> 00:12:00,000
Because at the end of the day, it comes down to data. What do you do with your
179
00:12:00,000 --> 00:12:04,000
data? So whether you're a highly regulated organization that has, you know,
180
00:12:04,000 --> 00:12:12,120
either FedRAMP, HITRUST, or HIPAA, or SEC regulatory compliance,
181
00:12:12,120 --> 00:12:15,440
it comes down to what do you do with your data? That's what they're looking at.
182
00:12:15,440 --> 00:12:18,840
And so anytime we're doing a co-pilot readiness assessment,
183
00:12:18,840 --> 00:12:21,680
the biggest, one of the biggest things that I'm going to look at is, well,
184
00:12:21,680 --> 00:12:27,280
who has access to what? And I'll ask the client, do you know who has access to
185
00:12:27,280 --> 00:12:31,880
every bit of information in your environment? And if you don't, then maybe we need
186
00:12:31,880 --> 00:12:36,080
to, you know, take a look at that. So, and of course, nobody does.
187
00:12:36,080 --> 00:12:39,880
So at the end of the day, you know, that's where we need to put in some of
188
00:12:39,880 --> 00:12:43,400
these safeguards and controls. And Purview does a really good
189
00:12:43,400 --> 00:12:49,280
job with policies that are pretty much built in that say,
190
00:12:49,280 --> 00:12:54,400
Copilot is essentially off limits to this data if it contains this or this,
191
00:12:54,400 --> 00:12:55,920
and, as you said, all those rules.
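The built-in policies described here can be pictured as a simple gate: if a document carries a blocking sensitivity label or a detected sensitive information type, Copilot-style processing is denied. A minimal Python sketch of that idea (conceptual only; the class, label names, and info-type names are invented for illustration and are not the Purview API):

```python
# Conceptual sketch only: models how a DLP-style rule can make content
# "off limits" to Copilot. Class, labels, and info-type names are invented.
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class Document:
    name: str
    sensitivity_label: Optional[str] = None
    detected_info_types: Set[str] = field(default_factory=set)

# Hypothetical rule targets: block AI processing for these.
BLOCKED_LABELS = {"Highly Confidential"}
BLOCKED_INFO_TYPES = {"Credit Card Number", "U.S. Social Security Number"}

def copilot_allowed(doc: Document) -> bool:
    """Return False when a rule would exclude the document from
    Copilot processing (summarization, Q&A, and so on)."""
    if doc.sensitivity_label in BLOCKED_LABELS:
        return False
    if doc.detected_info_types & BLOCKED_INFO_TYPES:
        return False
    return True
```

The point of the gate is that it applies even when the user asking Copilot has permission to open the file themselves.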
192
00:12:55,920 --> 00:13:02,800
And for people that are hearing Purview for the first time,
193
00:13:02,800 --> 00:13:05,360
and not fully understanding it,
194
00:13:05,360 --> 00:13:10,360
or say, how do you explain it to the C-level?
195
00:13:10,360 --> 00:13:14,560
Or no, that may be unfair. How do we explain it to a 10-year-old,
196
00:13:14,560 --> 00:13:17,320
or what is Microsoft Purview in simple terms?
197
00:13:17,320 --> 00:13:19,680
Yeah, it's, it's really,
198
00:13:20,520 --> 00:13:26,840
Purview really comes down to data protection,
199
00:13:26,840 --> 00:13:33,720
mostly internally, I find, whereas Defender, the security side, is more,
200
00:13:33,720 --> 00:13:38,160
you know, perimeter outside, generally speaking.
201
00:13:38,160 --> 00:13:42,360
So the way I explain it is that Purview, the name Purview,
202
00:13:42,360 --> 00:13:48,520
it's Microsoft's naming game, right? So the name Purview just really means
203
00:13:48,520 --> 00:13:52,680
things that are brought into focus. And so, you know, it's so cool. So, you know,
204
00:13:52,680 --> 00:13:57,360
and that's what it's doing. It's putting a spotlight on things that might have
205
00:13:57,360 --> 00:14:00,640
normally have been kind of hidden or under the curtains a bit.
206
00:14:00,640 --> 00:14:06,440
However, it comes from the old compliance portal. So it's basically,
207
00:14:06,440 --> 00:14:12,240
you know, I think of, I explain Purview with three things.
208
00:14:12,240 --> 00:14:15,480
If I'm talking to somebody that doesn't understand Purview,
209
00:14:15,480 --> 00:14:16,960
I'm not going to talk about insider risk.
210
00:14:17,280 --> 00:14:21,880
I'm going to talk about data loss prevention, who's sharing what and what,
211
00:14:21,880 --> 00:14:27,600
what information is leaving your organization? That's data loss prevention,
212
00:14:27,600 --> 00:14:30,280
data leak prevention, depending on, you know, who's saying it.
213
00:14:30,280 --> 00:14:34,880
Then retention, how long do you have to keep data?
214
00:14:34,880 --> 00:14:40,600
That's important. And then labels and classification should,
215
00:14:40,600 --> 00:14:45,840
your sensitive information have a watermark on it or be prevented from sharing
216
00:14:45,840 --> 00:14:49,680
externally or whatever control you want to set on it.
217
00:14:49,680 --> 00:14:56,800
But identifying that this data set has sensitive information.
218
00:14:56,800 --> 00:15:02,000
So really, you know, in a nutshell,
219
00:15:02,000 --> 00:15:08,000
it's just protection of your data from overexposure.
220
00:15:08,000 --> 00:15:10,480
That's in a nutshell. That's really what Purview is.
221
00:15:10,480 --> 00:15:14,800
All those tools just help do that and prevent that.
222
00:15:14,800 --> 00:15:21,880
Whether it's conversations or whether it's actual actions like sharing or permissions.
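The "identifying sensitive information" piece that labeling and DLP rest on is pattern detection. A rough sketch of how a credit-card-number detector might work, assuming a 16-digit pattern plus the Luhn checksum (Purview's real built-in sensitive information types also use keywords, formats, and confidence levels):

```python
# Rough sketch of a sensitive-information-type detector: a 16-digit
# card-number pattern plus the Luhn checksum to cut false positives.
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum over a string of digits."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list:
    """Return 16-digit candidates (spaces/hyphens allowed) that pass Luhn."""
    hits = []
    for m in re.finditer(r"\b(?:\d[ -]?){15}\d\b", text):
        digits = re.sub(r"[ -]", "", m.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits
```

A detector like this is what an auto-labeling or DLP policy would then act on: the match raises the signal, the policy decides what to watermark, block, or retain.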
223
00:15:21,880 --> 00:15:31,680
So when we think about Purview,
224
00:15:31,680 --> 00:15:40,440
what role will it play, or how has that role changed since we've had this topic,
225
00:15:40,440 --> 00:15:43,720
AI governance? I think it's really new,
226
00:15:43,760 --> 00:15:47,560
from, I don't know, three years ago to now, and a little bit into the future.
227
00:15:47,560 --> 00:15:49,760
What did you see? Yeah.
228
00:15:49,760 --> 00:15:59,120
Well, first of all, I've always told folks prior to the announcement of E7,
229
00:15:59,120 --> 00:16:05,920
I always said that I think Microsoft is going to go to almost a purely
229
00:15:59,120 --> 00:16:05,920
agentic form of Copilot that will be paid for, and then anything that
231
00:16:12,880 --> 00:16:18,040
that the end user would use for productivity, summarizations, writing emails,
232
00:16:18,040 --> 00:16:22,000
those sort of things would kind of be baked into the free version
233
00:16:22,000 --> 00:16:24,320
because we're seeing that line get really blurry.
234
00:16:24,320 --> 00:16:30,000
So, but with the E7, of course, I said
235
00:16:30,000 --> 00:16:35,080
all of that would happen, or Microsoft would come up with a new SKU that had
236
00:16:35,080 --> 00:16:38,120
all of that baked in and that's exactly what they did.
237
00:16:38,120 --> 00:16:42,000
So, but you do have additional governance things there.
238
00:16:42,000 --> 00:16:47,000
And so I think Purview has really stepped up its game in the last few years to
239
00:16:47,000 --> 00:16:53,600
handle the influx of agents that are being released, with Agent 365
240
00:16:53,600 --> 00:17:02,240
being able to monitor and manage that, DSPM to see what agents are being used,
241
00:17:02,240 --> 00:17:07,640
what sensitive information they're touching, activities from the end user.
242
00:17:07,640 --> 00:17:12,480
And then you've got, of course, Insider Risk, DLP; all of these have specific
243
00:17:12,480 --> 00:17:19,440
Copilot controls built into them today, and they didn't, you know, just a few years ago.
244
00:17:19,440 --> 00:17:22,200
You know, so, so we're definitely seeing a lot of that.
245
00:17:22,200 --> 00:17:24,760
I think it's going to continue to grow.
246
00:17:24,760 --> 00:17:29,080
What I'd like to see is a single admin center for Copilot
248
00:17:29,080 --> 00:17:32,760
across the board, instead of it being kind of scattered right now.
248
00:17:32,760 --> 00:17:34,120
That's what I'd like to see.
249
00:17:34,120 --> 00:17:38,160
And I think it'll happen. It's just, you know, you go into the admin center and you've got Copilot,
250
00:17:38,160 --> 00:17:42,440
right? There are some controls there, and then you've got Agent 365, and then you've got DSPM,
251
00:17:42,440 --> 00:17:45,600
and then you've got, you know, it's kind of a little scattered everywhere,
252
00:17:45,600 --> 00:17:50,640
hoping that at some point it'll kind of come together like a single dashboard.
253
00:17:50,640 --> 00:17:56,560
Yeah, I've seen it, I saw it at an event.
254
00:17:56,560 --> 00:17:58,360
I don't know which event it was.
255
00:17:59,200 --> 00:18:01,840
There's a software company, Rencore.
256
00:18:01,840 --> 00:18:08,880
They also make a governance solution; they have built a governance tool for all the other things
257
00:18:08,880 --> 00:18:12,360
and also for Copilot, which was really interesting to listen to.
258
00:18:12,360 --> 00:18:15,600
I think, yeah, that's the admin center.
259
00:18:15,600 --> 00:18:18,200
It's, yeah, I mean governance.
260
00:18:18,200 --> 00:18:23,880
Yeah, well, yeah, I'm actually working with and doing some things
261
00:18:23,880 --> 00:18:33,240
with a software company called ENow, and they have an app governance dashboard.
262
00:18:33,240 --> 00:18:38,680
One of the things historically that Purview was not good at, and I'll say even before Purview,
263
00:18:38,680 --> 00:18:45,240
when it was the Compliance Center, is that everything was kind of scattered everywhere and throughout.
264
00:18:45,240 --> 00:18:53,240
Whereas now we are seeing more of that unified dashboard so we can go into the compliance manager,
265
00:18:53,240 --> 00:18:59,160
for example, and see a lot of, you know, our compliance score, we can see, you know, how many sensitive
266
00:18:59,160 --> 00:19:01,760
information items have been shared and moved and all that.
267
00:19:01,760 --> 00:19:09,320
So we're getting better. But ENow has some really good dashboards
268
00:19:09,320 --> 00:19:16,200
integrated right into all of that to help monitor and manage the AI revolution as we're seeing it now.
269
00:19:16,200 --> 00:19:22,360
So that's something that I have hooked into my environment, and they have an app governance piece that's new.
270
00:19:22,360 --> 00:19:26,280
That's really good. So anyway, so there are some good products out there.
271
00:19:26,280 --> 00:19:33,000
I will say this, and it brings up kind of a good point: anytime I'm doing an evaluation for a client,
272
00:19:33,000 --> 00:19:37,640
I always look at what they are doing from a third-party perspective.
273
00:19:37,640 --> 00:19:45,080
In other words, what are they doing that Purview does, or Microsoft can do?
274
00:19:45,080 --> 00:19:51,720
Because a lot of organizations just don't know. So they'll go and they'll buy a third party product
275
00:19:51,720 --> 00:19:55,480
to fill this one little gap when Purview does just as good a job.
276
00:19:55,480 --> 00:19:58,280
Now it's not to say that Microsoft does as good as everybody else.
277
00:19:58,280 --> 00:20:02,040
There are areas and times where I would recommend and even Microsoft says,
278
00:20:02,040 --> 00:20:07,320
go get a third party on that because we're really not quite there yet on that,
279
00:20:07,320 --> 00:20:14,120
whatever that solution may be. So that is something that I look at is, you know,
280
00:20:14,120 --> 00:20:17,880
are you using a third party DLP? Well, why are we doing that?
281
00:20:17,880 --> 00:20:25,480
You're paying for an E5 or now an E7. Why would you do a third party DLP? Because I see no real reason
282
00:20:25,480 --> 00:20:30,920
for that. Maybe somebody has a good reason. But it's just so good now that I just don't see it,
283
00:20:30,920 --> 00:20:36,440
you know, the need for it. So that is something that I address often with clients:
284
00:20:36,440 --> 00:20:42,520
evaluating, you're using this product, but Microsoft does it and probably does it as good. If not
285
00:20:42,520 --> 00:20:48,600
better, you're already paying for it. So, you know, that's a conversation I have often. So
286
00:20:48,600 --> 00:20:57,720
I come from the data side, and I have, yeah, treated Purview a little bit like a data catalog.
287
00:20:57,720 --> 00:21:06,280
So I think it became so powerful in the last, yeah,
288
00:21:07,160 --> 00:21:14,280
two years. It's so awesome and so interesting. And, yeah, I think also you can do a lot
289
00:21:14,280 --> 00:21:21,480
with Purview, and I think from my perspective, all the people say, an admin center here,
290
00:21:21,480 --> 00:21:27,560
an admin center there. Yeah, I don't know if this is the right thinking, or it's more, yeah,
291
00:21:27,560 --> 00:21:36,200
you can use Purview to centralise this. I think that's good advice. And I have looked a little
292
00:21:36,200 --> 00:21:46,120
bit at what the hot topics in AI governance in the Microsoft space are. And the, yeah,
293
00:21:46,120 --> 00:21:56,600
the buzzword that I found is DSPM for AI. Why is it becoming such a critical capability?
294
00:21:56,600 --> 00:22:04,680
Well, so Microsoft, you know, confusion is kind of
295
00:22:04,680 --> 00:22:11,880
their middle name sometimes. And so what they did is they had DSPM inside Purview, they had
296
00:22:11,880 --> 00:22:23,640
DSPM, and then they had DSPM for AI. And now they have DSPM classic, DSPM for AI classic, and DSPM,
297
00:22:23,640 --> 00:22:29,560
the new one in preview. The classics are going to go away. So basically what they've done is they've
298
00:22:29,560 --> 00:22:34,600
taken the old DSPM, data security posture management, and they've rolled it into one platform.
299
00:22:34,600 --> 00:22:40,360
And they've included the AI components that were in DSPM for AI. I think it's a good move. And
300
00:22:40,360 --> 00:22:47,720
it's the right thing. I like centralised dashboards and controls. What it does, what I really like about
301
00:22:47,720 --> 00:22:55,960
it, number one, is it has literally the one-or-two-click policy creation built into it. So,
302
00:22:55,960 --> 00:23:09,400
for example, if you want to limit AI from touching a particular sensitivity type, you literally can
303
00:23:09,400 --> 00:23:14,440
do a single click, or one or two clicks. You go into it and it'll be under the
304
00:23:14,440 --> 00:23:18,760
recommendations. And so you go there and you click and you create a policy and it creates a policy.
305
00:23:18,760 --> 00:23:25,400
And all it's doing is it's creating like a DLP policy in the DLP admin centre, but it's doing it
306
00:23:25,400 --> 00:23:33,000
from DSPM for AI. And it's just writing, you know, it's just writing the policy in the background.
307
00:23:33,000 --> 00:23:39,560
You go over to DLP and there's the policy that you just created, but it can all be done
308
00:23:39,560 --> 00:23:45,960
from there, but it's focused on AI controls, or Copilot specifically. But you can do, I mean,
309
00:23:45,960 --> 00:23:51,000
it's not just that; you can cover, you know, any chat, any enterprise environment, or other
310
00:23:51,800 --> 00:23:57,000
AI tools, not just guarding and controlling Copilot. Now, there are some licensing things that
311
00:23:57,000 --> 00:24:03,000
need to happen there with pay-as-you-go and so forth, right? So that's a different story. But it is,
312
00:24:03,000 --> 00:24:11,480
it does make it really easy to see what's going on in your environment. What agents are
313
00:24:11,480 --> 00:24:19,160
out there, right? How often are the agents being used? What are they being used for? And even what are
314
00:24:19,160 --> 00:24:23,640
the prompts against those agents, which, if you have the right permissions, the admin can go and see.
315
00:24:23,640 --> 00:24:29,000
So there's a lot of really cool tools there and I go in there fairly often because
316
00:24:29,000 --> 00:24:37,320
it's changing a lot. And again, it's not fully baked yet, but they are basically
317
00:24:37,320 --> 00:24:42,040
taking DSPM and DSPM for AI and kind of rolling them into one platform now, which is good.
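The "one or two click" flow described above can be thought of as template expansion: the recommendation carries only the target (say, a sensitivity label), and the full DLP policy is written behind the scenes. A hypothetical sketch (every field name here is invented for illustration, not Purview's actual schema):

```python
# Hypothetical sketch: expanding a DSPM-style recommendation into a
# DLP-policy-shaped object. Field names are invented, not Purview's schema.
def dlp_policy_from_recommendation(rec: dict) -> dict:
    """Turn a small recommendation (just a label) into a full policy dict."""
    return {
        "name": f"Block Copilot - {rec['label']}",
        "locations": ["Microsoft 365 Copilot"],       # where the rule applies
        "conditions": [{"sensitivityLabel": rec["label"]}],
        "action": "BlockAIProcessing",
        "mode": rec.get("mode", "Simulate"),          # start in simulation
    }
```

The point is only that the recommendation is tiny, while the generated policy is the full artifact you later find waiting in the DLP admin centre.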
318
00:24:43,480 --> 00:24:52,360
I will stay a little bit on AI usage and Purview. How does Purview discover and understand
319
00:24:52,360 --> 00:25:01,080
AI usage across the organization? Yeah, well, it is, you know, it's definitely using,
320
00:25:01,080 --> 00:25:09,160
you know, all of the API calls, and of course, anytime anything is being done or
321
00:25:09,160 --> 00:25:17,880
accessed inside of, you know, a 365 environment, it's logging all of this
322
00:25:17,880 --> 00:25:28,040
information that's happening. And so it's interesting because, you know, when they announced at
323
00:25:28,040 --> 00:25:37,080
Build last year that when you have a Copilot for Microsoft 365 license, you get SharePoint
324
00:25:37,800 --> 00:25:42,200
SAM, SharePoint Advanced Management. And part of that of course includes additional
325
00:25:42,200 --> 00:25:49,160
governance pieces there. But one of the things that I do like is the
326
00:25:49,160 --> 00:25:57,720
restricted content feature. Basically, you can restrict a SharePoint site from Copilot. So you can
327
00:25:57,720 --> 00:26:03,000
pretty much say Copilot's off limits on that SharePoint site. However, it's a tad misleading because
328
00:26:03,000 --> 00:26:08,200
it doesn't mean that Copilot can't touch it. It means that it doesn't discover it. But if the
329
00:26:08,200 --> 00:26:14,280
user's already been to that site, Copilot can still surface that information. So it's a
330
00:26:14,280 --> 00:26:21,400
little bit misleading. But that was a great start. But what Purview has brought
331
00:26:21,400 --> 00:26:28,920
to the table with DSPM specifically, but it's even baked into DLP. You have DLP controls
332
00:26:28,920 --> 00:26:36,200
that say, you know, if I tag it with this label or if I tag it with this sensitivity
333
00:26:36,200 --> 00:26:42,360
type, it's, you know, completely off limits. But basically,
334
00:26:42,360 --> 00:26:50,360
you know, it understands, of course, it understands Copilot and understands,
335
00:26:50,360 --> 00:26:58,520
you know, the activities and AI activities that are happening against its applications and
336
00:26:58,520 --> 00:27:09,960
such within the environment. So, when we think, or, let me ask, what types of
337
00:27:09,960 --> 00:27:18,360
sensitive data are companies most worried about leaking, yeah, into AI tools, and what role
338
00:27:19,080 --> 00:27:27,720
does labeling and classification play in this part? Yeah, it can really vary by
339
00:27:27,720 --> 00:27:34,200
industry. I mean, obviously, if you're, you know, a healthcare organization or
340
00:27:34,200 --> 00:27:39,080
a hospital clinic or something like that, of course, then HIPAA comes into play and of course,
341
00:27:39,080 --> 00:27:45,880
you can do DLP policies based on that. But what I tend to like to do is I tend to like to combine
342
00:27:45,880 --> 00:27:50,760
them a little bit. So, what I'll do is I'll do an auto labeling policy against a particular
343
00:27:50,760 --> 00:27:57,400
sensitivity type. I'm not a fan of letting the users apply the labels. So, I like to do auto labeling,
344
00:27:57,400 --> 00:28:05,160
which of course does require E5 or better. So: applying a label to a particular sensitive information type,
345
00:28:05,160 --> 00:28:13,320
and then using either a DSPM policy, which basically writes a DLP policy, that says
346
00:28:13,320 --> 00:28:19,000
if this label is attached, then Copilot's off limits. It won't even summarize a
347
00:28:19,000 --> 00:28:27,720
document in there. It's completely off limits. Now, the areas that folks are probably
348
00:28:27,720 --> 00:28:35,560
the most sensitive about, it depends. I've had an organization that was mostly worried about PCI data, right?
349
00:28:35,560 --> 00:28:42,920
So, PCI and even magnetic stripe data, which means you usually have to build a custom sensitive information type
350
00:28:43,320 --> 00:28:51,080
if you're doing magnetic stripe, not just PCI. A hospital could be worried about
351
00:28:51,080 --> 00:28:57,720
any kind of medical information, and then usually proximity. So it may not be, oh, this person
352
00:28:57,720 --> 00:29:05,560
has this disease, but what's the proximity between the disease name and that person's name,
353
00:29:05,560 --> 00:29:12,920
right? Putting those together, that may be a situation there. Financial institutions, I'll tell you,
354
00:29:12,920 --> 00:29:16,600
you know what, one of the biggest ones, especially in financial institutions
355
00:29:16,600 --> 00:29:24,200
that I'm finding, is Teams meeting transcripts. That's probably the number one thing people are
356
00:29:24,200 --> 00:29:32,040
paranoid about: transcription, and the AI notes from the Facilitator agent.
357
00:29:32,040 --> 00:29:38,680
I've had more meetings with legal teams on those topics than any other area there. Yeah, they're
358
00:29:38,680 --> 00:29:42,520
worried about credit cards and Social Security numbers and customer accounts, maybe, but really,
359
00:29:42,520 --> 00:29:49,160
those are the areas where I tend to see the most focus right now. And Purview
360
00:29:49,160 --> 00:29:57,320
has really good solutions to help clean that up relatively quickly, if somebody's worried
361
00:29:57,320 --> 00:30:05,000
about that, even if there's a retention policy. That's Priority Cleanup; it's
362
00:30:05,000 --> 00:30:13,800
in preview right now, but it can shorten the life of content under a legal hold, and it
363
00:30:13,800 --> 00:30:19,640
overrides retention policies. I've used it, and it's a good way to give the lawyers a warm and
364
00:30:19,640 --> 00:30:24,680
fuzzy that you can clean up metadata from the meeting pretty quickly, even if it's under
365
00:30:24,680 --> 00:30:34,440
litigation hold or a retention policy. Those are good practical tips. But when
366
00:30:34,440 --> 00:30:43,640
we think about it, what's the first thing organizations should assess before enabling Copilot, or
367
00:30:43,640 --> 00:30:51,720
Copilot broadly, I'd say? Yeah, it really comes down to data. So,
368
00:30:51,720 --> 00:30:57,720
I do an assessment, of course a technical assessment workshop, and it's really
369
00:30:57,720 --> 00:31:03,480
straight from what Microsoft has on their adoption site. Basically, whether they're technically ready,
370
00:31:03,480 --> 00:31:11,320
right, the licensing and all of that. But from a practical standpoint, it's
371
00:31:11,320 --> 00:31:18,280
making sure that you have some DLP policies in place, that you're governing your sensitive data,
372
00:31:19,960 --> 00:31:26,200
checking SharePoint permissions, things like that. Really, it's kind of tedious
373
00:31:26,200 --> 00:31:33,160
work, because Microsoft is not the best at permissions reporting for SharePoint. So sometimes
374
00:31:33,160 --> 00:31:38,760
some scripting may be involved, or even some third-party products.
375
00:31:38,760 --> 00:31:45,800
Ultimately, that's the first thing we start talking about:
376
00:31:45,800 --> 00:31:52,120
what does your sharing policy look like, right? If we're allowing anonymous access,
377
00:31:52,120 --> 00:31:57,000
if we're doing "anyone" links, then we might have a problem and need to scale that back
378
00:31:57,000 --> 00:32:03,960
a little bit. But even "anyone" links are not active until somebody has either shared them
379
00:32:03,960 --> 00:32:08,440
through email or actually clicked on the link. It doesn't automatically give people rights,
380
00:32:08,440 --> 00:32:13,640
and it doesn't automatically give Copilot rights either. So it's not as scary as it sounds,
381
00:32:14,360 --> 00:32:21,960
but again, it does come down to: what does our retention look like?
382
00:32:21,960 --> 00:32:28,280
Because if we're not doing any retention policies, whether
383
00:32:28,280 --> 00:32:33,880
it's email or SharePoint or Teams or any of that, that just increases the surface that
384
00:32:33,880 --> 00:32:40,040
Copilot can go after as well. The more data you keep, the more is accessible to Copilot. So look at retention
385
00:32:40,040 --> 00:32:45,240
policies, look at your labels and classification, and look at what we're allowing to be shared.
386
00:32:45,240 --> 00:32:53,800
Those are probably the biggest things. I've been thinking a bit about a post I did half a year or so ago,
387
00:32:53,800 --> 00:33:01,880
and it's more about the risks, I think, that Copilot and AI models can bring us,
388
00:33:01,880 --> 00:33:09,800
and then I get the comment: Copilot only shows users what they already have access to.
389
00:33:09,800 --> 00:33:18,200
And I say, yeah, okay, that could be technically true, but only if you have done your governance,
390
00:33:18,200 --> 00:33:27,400
compliance, sensitivity labeling, and so on. So the question
391
00:33:27,400 --> 00:33:36,040
becomes:
392
00:33:36,040 --> 00:33:43,720
is it dangerous to roll out Copilot, or can every company do it?
393
00:33:43,720 --> 00:33:59,800
Yeah, I do like that question. I mean, bad actors are using AI as well, right? We know that.
394
00:33:59,800 --> 00:34:10,600
I do like the fact that Microsoft is adding to E5; they're adding Security Copilot to help
395
00:34:10,600 --> 00:34:15,320
automate some of those controls, and they're putting in some agents. Actually, I just noticed
396
00:34:15,320 --> 00:34:22,040
even inside of Purview, under Agents, you can go in;
397
00:34:22,040 --> 00:34:28,360
they've got two new triage agents in preview, so they'll triage your insider risk alerts and triage your
398
00:34:28,360 --> 00:34:37,480
DLP alerts. These are outside of Security Copilot. It's funny, because we were
399
00:34:37,480 --> 00:34:42,520
talking about third-party products earlier; I had a company tell me, yeah, we're developing a
400
00:34:42,520 --> 00:34:47,000
product that will go into Purview and do triage. That's interesting, because Microsoft's already got two
401
00:34:47,000 --> 00:34:51,480
of them. It's that kind of thing where people just don't know, and so there's been a lot
402
00:34:51,480 --> 00:34:55,560
of time spent on products where Microsoft is already way ahead of the game.
403
00:34:57,640 --> 00:35:06,760
But yeah, it definitely comes down to
404
00:35:06,760 --> 00:35:11,160
what your data posture looks like, what your security posture
405
00:35:11,160 --> 00:35:21,400
looks like. But I encourage every customer to pilot Copilot and use it, because
406
00:35:21,400 --> 00:35:27,880
I'm telling you, it can absolutely save the average user time, if they understand how to use it.
407
00:35:27,880 --> 00:35:32,840
I've developed jump-start programs for companies to train their users;
408
00:35:32,840 --> 00:35:40,840
if they do it properly, they can absolutely save a ton of time in their day. I probably save
409
00:35:40,840 --> 00:35:48,440
three hours a day, easily, just on some of the writing and framework stuff that I need to
410
00:35:48,440 --> 00:35:53,000
put together, or a quick PowerPoint presentation. And of course,
411
00:35:53,000 --> 00:35:58,120
having Copilot as a coworker, that's just the icing on top right there.
412
00:35:58,120 --> 00:36:07,640
Now, I have a funny example: I worked for a company, and
413
00:36:07,640 --> 00:36:14,920
their most-used tools were SharePoint and Teams. I looked a little bit at
414
00:36:14,920 --> 00:36:22,040
the data they generated, and the log file for Copilot was nearly as big as all the data they
415
00:36:22,040 --> 00:36:29,880
stored in those files. So is oversharing becoming a bigger problem
416
00:36:29,880 --> 00:36:40,840
with Copilot or AI? Yeah, I mean, oversharing definitely can
417
00:36:40,840 --> 00:36:48,120
be a problem. But it's a problem for the sake of
418
00:36:48,120 --> 00:36:54,280
oversharing itself, not even just the AI part of it.
419
00:36:54,280 --> 00:36:58,840
Look, if you share something with somebody, particularly internally,
420
00:36:58,840 --> 00:37:04,520
they've already got access to it anyway, so Copilot can surface it.
421
00:37:06,040 --> 00:37:12,760
so the bigger problem is oversharing for oversharing's sake, not even necessarily from
422
00:37:12,760 --> 00:37:19,000
an AI standpoint. Although, if you don't want things surfaced,
423
00:37:19,000 --> 00:37:27,320
you definitely want to control that. The bigger risk, though, for me, is people having
424
00:37:27,320 --> 00:37:31,960
access to things they may not know they have access to. That's where the oversharing and over-
425
00:37:31,960 --> 00:37:39,320
permissioning can be a problem, because if you throw Copilot into that, Copilot is going to surface
426
00:37:39,320 --> 00:37:47,080
it, right? So yeah, it's a risk from the standpoint of: you should know
427
00:37:47,080 --> 00:37:54,040
who has access to what. And like I said, Microsoft's not great at presenting this information
428
00:37:54,040 --> 00:37:58,840
very well. There are some good tools out there. We've used other third-party products like
429
00:37:58,840 --> 00:38:07,080
AvePoint or something like that to surface who has access to what, or to write those
430
00:38:07,080 --> 00:38:12,120
permissions reports. Really, for any organization thinking about going to Copilot,
431
00:38:12,120 --> 00:38:17,560
and launching either Copilot or AI in their environment, the very first thing I would be
432
00:38:17,560 --> 00:38:23,240
doing is pulling a full sharing and permissions report to see who has access to what. And if you
433
00:38:23,240 --> 00:38:29,880
don't like what you see, then you need to fix that first. Let's talk a little bit more about
434
00:38:29,880 --> 00:38:37,800
data loss prevention. How does DLP evolve in AI-first workplaces?
435
00:38:37,800 --> 00:38:49,560
Yeah, so now there are direct controls within DLP that specifically
436
00:38:50,520 --> 00:38:58,280
tell Copilot, or any AI, that content is off limits
437
00:38:58,280 --> 00:39:04,360
based on parameters, just like any other control or condition that you would put on a DLP policy. So
438
00:39:04,360 --> 00:39:10,840
if it matches this sensitive information type, or if it contains
439
00:39:10,840 --> 00:39:16,440
this label, which is where I use labels often. I'll just use a DLP policy that says if it has
440
00:39:16,440 --> 00:39:23,080
this label, and I make the label restricted from AI, and then that's deployed, and in
441
00:39:23,080 --> 00:39:31,160
DLP I can say if it contains this label, then there's basically a control that says
442
00:39:31,160 --> 00:39:37,880
exclude Copilot from processing that data. And I've done that with an entire SharePoint site.
443
00:39:37,880 --> 00:39:43,880
An entire site can be labeled, and then I'll put a DLP policy on it that says if it has
444
00:39:43,880 --> 00:39:51,320
that label, for Copilot it's a no-go. So they've done a really good job of integrating
445
00:39:51,320 --> 00:39:57,080
DLP into AI controls, specifically around Copilot.
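[The label-plus-DLP pattern described here can be sketched as plain decision logic. This is only an illustration of the rule's shape: the label names and document model below are made up, and Purview evaluates the real policy server-side, not through code like this.]

```python
# Sketch of the "label blocks Copilot" DLP pattern: the label itself is just
# a tag; all the enforcement lives in the rule that keys off it.
# Hypothetical data model -- not any real Microsoft API.

RESTRICTED_LABELS = {"Restricted-AI", "Highly Confidential"}

def copilot_may_process(document: dict) -> bool:
    """Return False when a DLP-style rule excludes the document from AI processing."""
    label = document.get("sensitivity_label")
    # The rule: if the item carries a restricted label, Copilot is a no-go,
    # even for summarization -- the content is never handed to the model.
    return label not in RESTRICTED_LABELS

docs = [
    {"name": "q3-forecast.docx", "sensitivity_label": "General"},
    {"name": "merger-notes.docx", "sensitivity_label": "Restricted-AI"},
]
allowed = [d["name"] for d in docs if copilot_may_process(d)]
print(allowed)  # only the General-labeled document survives the filter
```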
446
00:39:57,080 --> 00:40:10,520
Yeah. You have this auto-labeling function. Can you explain this a little bit, especially
447
00:40:10,520 --> 00:40:19,560
when we look at AI prompts and responses? Yeah. So what I do is start with
448
00:40:19,560 --> 00:40:25,320
a label. Now, there's nothing particularly special about this label.
449
00:40:25,320 --> 00:40:31,160
You can create a label that has controls and permissions and watermarks and all of this.
450
00:40:31,160 --> 00:40:37,640
That's just standard traditional labeling. What I use a label for, with respect to
451
00:40:37,640 --> 00:40:44,680
restricting Copilot, is: I create the label, and it can be any label,
452
00:40:44,680 --> 00:40:50,760
and really that label has no controls built into it. It's just a label,
453
00:40:50,760 --> 00:40:55,480
just a tag, if you will. Then I'll go back and create an auto-labeling policy that
454
00:40:55,480 --> 00:41:00,840
applies to everybody, right? It's deployed,
455
00:41:00,840 --> 00:41:08,760
boom, it's out there. But I'm going to apply it to maybe a particular SharePoint site,
456
00:41:08,760 --> 00:41:14,680
or maybe a particular sensitive information type. That's where the auto-apply comes in. Then I'll go and
457
00:41:14,680 --> 00:41:20,600
create a DLP policy: where you see this label, Copilot's off limits. Basically,
458
00:41:20,600 --> 00:41:26,600
it restricts it. Essentially, when that label shows up, Copilot says, oh, there's a label, and it will not
459
00:41:26,600 --> 00:41:33,960
process it. It's game over for that content, if you will. It
460
00:41:33,960 --> 00:41:40,760
works really well. I demo that live when I speak on securing Copilot, on considerations for
461
00:41:40,760 --> 00:41:46,920
security before you deploy Copilot. I demo in real time how that works, and it works
462
00:41:46,920 --> 00:41:53,640
very, very well. And of course, now with DSPM, you don't have to go to DLP to create
463
00:41:53,640 --> 00:41:58,920
the policy. You can just do it in a couple of clicks, and it creates that same policy for you, but it's
464
00:41:58,920 --> 00:42:08,120
in DSPM now. How granular can
465
00:42:08,120 --> 00:42:13,560
organizations get with DLP controls, from your perspective?
466
00:42:13,560 --> 00:42:22,680
Granularity of DLP? Yeah. You can get down to the single item level.
467
00:42:22,680 --> 00:42:29,080
As a matter of fact, that's where the labeling comes in,
468
00:42:29,080 --> 00:42:34,920
because labeling can get really granular. Literally, you can tag
469
00:42:34,920 --> 00:42:43,720
a single document with that label if you want, or a document library or SharePoint site, or anything
470
00:42:43,720 --> 00:42:51,800
that has keywords or matches any kind of regular expression, right? So you can get really granular with that.
471
00:42:51,800 --> 00:43:00,760
Then from there, the DLP policy itself can also get
472
00:43:00,760 --> 00:43:07,240
granular, but you can use that label as the trigger for that DLP policy, essentially.
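[The keyword, regex, and proximity ideas behind a custom sensitive information type, as discussed above, can be sketched roughly like this. The patterns are simplified placeholders (a Track-2-style magnetic stripe shape and a naive proximity window), not Microsoft's actual classifiers.]

```python
import re

# Rough sketch of a custom sensitive-information-type check: a regex pattern
# plus a proximity rule (e.g. a disease name near a person's name).

# Track-2 magnetic stripe data starts with ';', a 13-19 digit PAN, then '='.
MAGSTRIPE = re.compile(r";\d{13,19}=\d+")

def contains_magstripe(text: str) -> bool:
    return MAGSTRIPE.search(text) is not None

def within_proximity(text: str, term_a: str, term_b: str, window: int = 100) -> bool:
    """True when both terms occur within `window` characters of each other."""
    for match in re.finditer(re.escape(term_a), text, re.IGNORECASE):
        start = max(0, match.start() - window)
        end = match.end() + window
        if re.search(re.escape(term_b), text[start:end], re.IGNORECASE):
            return True
    return False

sample = "Patient Jane Doe was diagnosed with diabetes last spring."
print(contains_magstripe(";4111111111111111=24051010000000000000"))  # True
print(within_proximity(sample, "diabetes", "Jane Doe"))              # True
```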
473
00:43:07,240 --> 00:43:14,680
And then you can get into insider risk, so that when that DLP
474
00:43:14,680 --> 00:43:19,560
policy gets triggered, it triggers insider risk. You see how this can just go on and
475
00:43:19,560 --> 00:43:27,800
on, it's so tightly integrated that it just really works well. That's one thing I like
476
00:43:27,800 --> 00:43:34,040
about insider risk: you can trigger off of a DLP policy. If this is triggered, do this.
477
00:43:34,040 --> 00:43:41,400
Now, scale it by risk level. I briefly mentioned adaptive protection earlier.
478
00:43:41,400 --> 00:43:47,080
That's part of insider risk, and it uses some of the components there,
479
00:43:47,080 --> 00:43:54,360
but basically, the way I view it, no two users are necessarily
480
00:43:54,360 --> 00:44:01,960
the same, no risks are the same, and no day is the same. So what I do with, say, a DLP policy, insider
481
00:44:01,960 --> 00:44:08,360
risk, and adaptive protection: I'll create one policy, and in that policy I'll have three rules.
482
00:44:09,320 --> 00:44:18,120
Rule one will say if their risk level is minor, then maybe I'm just going to give them
483
00:44:18,120 --> 00:44:24,680
a policy tip: hey, you might not want to do this, right? They've
484
00:44:24,680 --> 00:44:29,480
not overshared, they've not over-permissioned, they've not downgraded labels
485
00:44:29,480 --> 00:44:38,680
a lot, so their risk level is not too high. But maybe someone's a more moderate
486
00:44:38,680 --> 00:44:45,400
risk. Then I have rule two, which basically says, you have a moderate risk. So
487
00:44:45,400 --> 00:44:55,000
we're going to crank this up: a policy tip, and maybe block the action, but that's it.
488
00:44:55,000 --> 00:45:03,240
And then rule three is: you have a history of risky behavior,
489
00:45:04,280 --> 00:45:09,800
so now we're going to cut it off, we're going to block the access. That's adaptive protection.
490
00:45:09,800 --> 00:45:15,320
It scales, and it works within DLP, within Conditional Access, and all of that.
491
00:45:15,320 --> 00:45:23,000
So you can apply the rules based on the user's historical risk level, and it
492
00:45:23,000 --> 00:45:29,320
just uses an algorithm that Microsoft has baked in. That's how all of that, DLP,
493
00:45:30,920 --> 00:45:35,080
insider risk, all of that kind of works together,
494
00:45:35,080 --> 00:45:42,760
whether we're talking Copilot or not, but it certainly works from a Copilot perspective as well.
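[The one-policy, three-rule adaptive protection setup described here can be sketched as a simple mapping from risk tier to enforcement. The tier names follow the minor/moderate/elevated levels mentioned, but the actions are illustrative, not Purview's exact settings.]

```python
# Sketch of the one-policy, three-rule adaptive protection idea:
# the same DLP trigger, with enforcement escalating by the user's risk tier.

def dlp_action(risk_level: str) -> list[str]:
    if risk_level == "minor":
        # Rule 1: low historical risk -- just nudge the user.
        return ["policy_tip"]
    if risk_level == "moderate":
        # Rule 2: crank it up -- tip plus block the action, but that's it.
        return ["policy_tip", "block_action"]
    if risk_level == "elevated":
        # Rule 3: history of risky behavior -- block access outright.
        return ["block_access"]
    raise ValueError(f"unknown risk level: {risk_level}")

for level in ("minor", "moderate", "elevated"):
    print(level, "->", dlp_action(level))
```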
495
00:45:42,760 --> 00:45:49,480
Yeah, I'm sitting here in Europe, in Stuttgart, and we have this wonderful
496
00:45:49,480 --> 00:45:59,800
EU AI Act. And there's one part that's about auditing your systems. What role can Purview play, especially
497
00:45:59,800 --> 00:46:06,680
with AI auditing now? Yeah, that's a great one. So, yes, I'm familiar with
498
00:46:06,680 --> 00:46:17,720
the EU AI Act. First of all, I advise customers to use Compliance Manager, because in
499
00:46:17,720 --> 00:46:23,240
Compliance Manager, you'll get your compliance score, but
500
00:46:23,240 --> 00:46:29,320
you'll also get all the actionable items that you can take in order to raise
501
00:46:29,320 --> 00:46:34,440
that score. And even with the EU AI Act, I've looked at it, because I talked about it when I
502
00:46:34,440 --> 00:46:44,120
did a session for Wales several years ago, Wales, in the UK, and in fact, a guy from
503
00:46:44,120 --> 00:46:51,720
there, actually, a Microsoft employee, was the one who nominated me for my MVP. So,
504
00:46:51,720 --> 00:46:58,760
even that still comes down to data. How are you protecting the
505
00:46:58,760 --> 00:47:04,200
data, and how are you allowing it to traverse data center to data center and not leave the
506
00:47:04,200 --> 00:47:13,080
EU Data Boundary, as they call it, right? But yeah, Compliance Manager
507
00:47:13,080 --> 00:47:19,240
really gives you those actionable items. It tells you what Microsoft is doing on their end,
508
00:47:19,240 --> 00:47:23,880
what they're responsible for, and then it tells you what you're responsible for in order to
509
00:47:23,880 --> 00:47:29,800
move that score up. And then of course you can assign those out to people and so forth. So
510
00:47:29,800 --> 00:47:34,600
if you really want to know where you stand at an industry level,
511
00:47:34,600 --> 00:47:39,800
Compliance Manager inside of Purview is a really good spot.
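[Roughly speaking, the score described here works like points earned from completed improvement actions over the total achievable points. The following is a toy illustration of that arithmetic, with made-up actions and point values, not the product's real scoring model.]

```python
# Simplified illustration of a Compliance Manager-style score: completed
# improvement actions earn their points; the score is earned / achievable.
# Action names and point values are invented for the example.

actions = [
    {"name": "Enable audit logging",      "points": 27, "owner": "you",       "done": True},
    {"name": "Deploy DLP for SharePoint", "points": 9,  "owner": "you",       "done": False},
    {"name": "Encrypt data at rest",      "points": 27, "owner": "Microsoft", "done": True},
]

earned = sum(a["points"] for a in actions if a["done"])
achievable = sum(a["points"] for a in actions)
score_pct = round(100 * earned / achievable)
print(f"compliance score: {score_pct}%")  # 54 of 63 points -> 86%
```

Note how both "your" actions and Microsoft-managed actions contribute to the total, which mirrors the shared-responsibility split the answer describes.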
512
00:47:39,800 --> 00:47:47,880
And I'd like to come back a little bit to a topic we started talking about
513
00:47:48,520 --> 00:47:54,920
earlier in this discussion: insider risk. I think this topic is old;
514
00:47:54,920 --> 00:48:05,240
it's been around since I started in IT 25 years ago. But why is insider risk feeling
515
00:48:05,240 --> 00:48:14,520
especially relevant in the AI era? I'm sorry, why is what? Yeah, insider
516
00:48:14,520 --> 00:48:20,460
risk. I think it's a topic we've had all these years in IT,
517
00:48:20,460 --> 00:48:24,760
but now we have this topic more on the table
518
00:48:24,760 --> 00:48:30,320
with management and so on. Do you know why this
519
00:48:30,320 --> 00:48:34,280
old topic is so prominent now?
520
00:48:34,280 --> 00:48:41,000
Yeah, well, like I said, I
521
00:48:41,000 --> 00:48:46,100
think the biggest risk is from inside, whether it's
522
00:48:46,100 --> 00:48:49,700
lack of user understanding or there's some
523
00:48:49,700 --> 00:48:55,540
maliciousness to it as well. But I think insider risk
524
00:48:55,540 --> 00:49:03,700
is so prominent now. And really, it's a big product.
525
00:49:03,700 --> 00:49:06,500
It's almost like Purview in itself; Purview is just a
526
00:49:06,500 --> 00:49:10,140
suite of solutions, right? Insider risk is part of
527
00:49:10,140 --> 00:49:13,700
that, but insider risk also has several big components, like
528
00:49:13,700 --> 00:49:17,260
adaptive protection. It integrates with communication
529
00:49:17,260 --> 00:49:24,900
compliance, and it integrates into DSPM, Conditional Access, and DLP.
530
00:49:24,900 --> 00:49:29,860
It's just kind of woven throughout as its own
531
00:49:29,860 --> 00:49:36,300
solution. But I think it's zero trust: when we're
532
00:49:36,300 --> 00:49:39,420
talking about the zero trust model, it's all about not trusting
533
00:49:39,440 --> 00:49:46,480
anybody; assume everything comes from a bad
534
00:49:46,480 --> 00:49:49,600
source and don't believe anything you hear, right? That's
535
00:49:49,600 --> 00:49:52,400
pretty much what zero trust is. I would tell people zero
536
00:49:52,400 --> 00:49:55,000
trust is: I don't even trust my grandma if she were to send me an
537
00:49:55,000 --> 00:49:59,000
email, right? I don't even trust it. And I
538
00:49:59,000 --> 00:50:03,320
think that's where we have to be, because whether it's
539
00:50:03,320 --> 00:50:07,040
intentional or not, and I think most of the time it's not,
540
00:50:08,480 --> 00:50:12,360
I do think it is definitely an area that we need
541
00:50:12,360 --> 00:50:17,080
to focus on. An interesting thing, we mentioned communication
542
00:50:17,080 --> 00:50:20,880
compliance; communication compliance started out as supervision
543
00:50:20,880 --> 00:50:26,600
some years ago. Well, supervision was originally created to
544
00:50:26,600 --> 00:50:30,240
stop bullying in schools. That's what it was for. It was
545
00:50:30,240 --> 00:50:35,120
specifically for education, and it was to track conversations
546
00:50:35,120 --> 00:50:39,480
that could indicate there was some bullying going on in school. And
547
00:50:39,480 --> 00:50:42,560
it worked out so well, and it was successful that Microsoft
548
00:50:42,560 --> 00:50:48,800
moved it into the compliance and security center
549
00:50:48,800 --> 00:50:51,880
as supervision, but expanded it a bit, and now it's communication
550
00:50:51,880 --> 00:50:55,240
compliance. Essentially, at some point, I've
551
00:50:55,240 --> 00:50:59,000
heard that it's going to be rebranded to communication DLP,
552
00:50:59,000 --> 00:51:02,200
because essentially what it is, it's tracking conversations in
553
00:51:02,200 --> 00:51:07,480
Teams, and even text messages if you plug in the right hooks
554
00:51:07,480 --> 00:51:11,880
there. Anyway, a lot of things are coming back to
555
00:51:11,880 --> 00:51:16,320
communication compliance; a lot of policies are kind of
556
00:51:16,320 --> 00:51:19,040
integrated, and insider risk is no different.
557
00:51:19,040 --> 00:51:30,840
Do we have some new insider risks brought on by generative AI,
558
00:51:30,840 --> 00:51:39,960
or AI overall, or is it only an old topic with, I don't know, a new
559
00:51:39,960 --> 00:51:41,080
color on it?
560
00:51:41,080 --> 00:51:49,400
Yeah. So insider risk, if I
561
00:51:49,400 --> 00:51:52,440
understand the question, uses
562
00:51:52,440 --> 00:51:56,200
some algorithms, sort of built-in machine learning and AI,
563
00:51:56,200 --> 00:52:00,800
built into insider risk. That's how it scales
564
00:52:00,800 --> 00:52:05,080
to the risk level. You can control some of that, but
565
00:52:05,080 --> 00:52:09,120
basically the recommendation is to use Microsoft's recommendations
566
00:52:09,120 --> 00:52:13,000
for its thresholds and all of that, because it's
567
00:52:13,000 --> 00:52:17,160
basically using, and I can't even tell you exactly what,
568
00:52:17,160 --> 00:52:21,520
some sort of AI and machine learning built into it. But
569
00:52:21,520 --> 00:52:25,920
look, Microsoft is not new to AI. It's been using AI for
570
00:52:25,920 --> 00:52:30,800
decades. Think about the Focused Inbox, right? That's in
571
00:52:30,800 --> 00:52:34,640
Outlook, using some version of AI to determine
572
00:52:34,640 --> 00:52:38,200
mail that should be focused for you, or mail that's maybe not so
573
00:52:38,200 --> 00:52:43,200
interesting to you, right? That's kind of an AI. The
574
00:52:43,200 --> 00:52:47,120
natural language calendaring in classic Outlook: you can go
575
00:52:47,120 --> 00:52:51,000
and type, I want this meeting to be on the third Wednesday of
576
00:52:51,000 --> 00:52:55,360
March of '27, and it'll know exactly what day that is. You could
577
00:52:55,360 --> 00:53:00,160
type that right in the date field. You can't do it
578
00:53:00,160 --> 00:53:05,240
in the new Outlook, but anyway, I always liked that. Then
579
00:53:05,240 --> 00:53:09,200
FindTime, right? FindTime and the pick-a-time feature inside
580
00:53:09,200 --> 00:53:17,800
of Outlook. And then Designer inside of
581
00:53:17,800 --> 00:53:21,880
PowerPoint is all part of AI. Of course, now it's combined with
582
00:53:21,880 --> 00:53:25,680
Copilot, but in the early days it was not; it was just its own
583
00:53:25,680 --> 00:53:28,480
thing. So it's nothing really new in insider risk, because
584
00:53:28,480 --> 00:53:31,880
Microsoft is pretty chock-full of AI capabilities and machine learning.
585
00:53:31,880 --> 00:53:38,240
Awesome, this was really interesting. Now we come to one of
586
00:53:38,240 --> 00:53:47,560
my favorite parts: the hot takes. I've looked at some
587
00:53:48,600 --> 00:53:54,520
headlines and discussions on LinkedIn, and you
588
00:53:54,520 --> 00:54:00,400
can give a short answer. AI governance is currently more marketing
589
00:54:00,400 --> 00:54:02,880
than reality. Agree or disagree?
590
00:54:02,880 --> 00:54:10,440
Yeah, probably. I would agree. It's a good buzzword.
591
00:54:10,440 --> 00:54:16,960
Most organizations are enabling co-pilot too early without fixing
592
00:54:16,960 --> 00:54:22,520
permissions first. I'm sorry, didn't hear that. Most organizations
593
00:54:22,520 --> 00:54:27,480
are enabling Copilot too early without fixing permissions
594
00:54:27,480 --> 00:54:31,760
first. Yeah, I would say that's probably been my experience,
595
00:54:31,760 --> 00:54:37,800
yeah. And sensitive data alerts are finally getting the attention
596
00:54:37,800 --> 00:54:47,400
they deserved years ago. Yes, for sure. And now, one Purview
597
00:54:47,400 --> 00:54:52,800
feature that every company should understand:
598
00:54:52,800 --> 00:55:00,440
one Purview feature that every organization should
599
00:55:00,440 --> 00:55:08,200
understand. I would say labeling and classification.
600
00:55:08,200 --> 00:55:13,640
Understanding what documents you have
601
00:55:13,640 --> 00:55:17,360
in your environment and classifying them accordingly would be key.
602
00:55:17,360 --> 00:55:23,000
And one governance mistake everyone makes.
603
00:55:29,440 --> 00:55:33,880
For me, I actually wrote an article on this called
604
00:55:33,880 --> 00:55:38,600
"Lockdown and Leaking Out." I think it's over-tightening the environment,
605
00:55:38,600 --> 00:55:41,920
because I think when you do that, you're forcing shadow IT.
606
00:55:41,920 --> 00:55:48,280
So, to summarize,
607
00:55:48,280 --> 00:55:53,600
I would say it's not thinking through what you're trying to achieve, rather
608
00:55:53,600 --> 00:55:58,720
than just controlling; then you're going to force people to do the
609
00:55:58,720 --> 00:56:07,240
wrong thing. Yeah. Next question. As I said before, I do the 365
610
00:56:07,240 --> 00:56:10,920
contact net conference, but what's your favorite Microsoft conference?
611
00:56:10,920 --> 00:56:18,240
Oh, the Microsoft one would be Ignite, the public conference.
612
00:56:18,240 --> 00:56:23,120
My favorite conference that I'm involved in is the MVP Summit,
613
00:56:23,120 --> 00:56:26,840
because it's a great time for all of the MVPs to get together
614
00:56:26,840 --> 00:56:30,400
and share and learn new things that are around the corner.
615
00:56:30,400 --> 00:56:35,680
It's all under NDA, so we can't really talk about it other than it's a great
616
00:56:35,680 --> 00:56:40,600
time. But honestly, beyond that, I really enjoy just getting together
617
00:56:40,600 --> 00:56:43,760
with community conferences as well. And I speak at a lot of those,
618
00:56:43,760 --> 00:56:49,320
who actually speaking at two of them next month. So really, really
619
00:56:49,320 --> 00:56:55,160
enjoy the, the community conferences tech on in some of those as well.
Mirko: And where can people best follow your work, sessions, and content?

Alan: You can look me up on YouTube — I just finished a four-part series there on Privileged Identity Management. That's under Alan Cox MVP on YouTube. And then LinkedIn as well — Alan Cox MVP there too. I do a weekly newsletter on LinkedIn, kind of my digital 365 digest — a more written, article-style newsletter — while the YouTube channel is, of course, video content.
Mirko: Great — we'll put all the links in the show notes. So, any final advice for the listeners? What is the key point they should take away from this session today?
Alan: I think the main thing is: don't be afraid of Purview, don't be afraid of governance. And look at it not from a control standpoint — not just "we have a policy because we have to have policies" — but ask what you are actually trying to achieve. What's the real goal? My summary when it comes to governance is: make it easy for the user to do the right thing. Because if you make things too hard, too complicated, with too many controls, people will go outside the confines of your boundary, your security bubble, and introduce shadow IT. And that's where the risk really lies.
Mirko: Yeah, awesome. Then let me say thank you for spending nearly an hour with me here and sharing all of this. I think people should check out your channels — I especially like your YouTube channel. Thank you so much for your time.

Alan: I appreciate it. Thank you.

Founder of m365.fm, m365.show and m365con.net
Mirko Peters is a Microsoft 365 expert, content creator, and founder of m365.fm, a platform dedicated to sharing practical insights on modern workplace technologies. His work focuses on Microsoft 365 governance, security, collaboration, and real-world implementation strategies.
Through his podcast and written content, Mirko provides hands-on guidance for IT professionals, architects, and business leaders navigating the complexities of Microsoft 365. He is known for translating complex topics into clear, actionable advice, often highlighting common mistakes and overlooked risks in real-world environments.
With a strong emphasis on community contribution and knowledge sharing, Mirko is actively building a platform that connects experts, shares experiences, and helps organizations get the most out of their Microsoft 365 investments.

Microsoft MVP | Udemy Instructor | M365 Evangelist
IT Leader, Instructor, and Conference Speaker with decades of experience in Microsoft solutions, currently serving as Director of Microsoft Governance and Microsoft MVP for M365 & Copilot. Passionate about leveraging technology to empower organizations, enhance productivity, and promote secure digital transformation. Dedicated to aligning innovative solutions with organizational goals to deliver impactful outcomes.

![Protecting Microsoft Copilot with Purview, DLP & Insider Risk with Alan Cox [MVP]](https://img.youtube.com/vi/uJ3OP1FBkfM/maxresdefault.jpg)
