Nudge Security gives you visibility into AI adoption across your organization, helps you discover and assess AI apps as they appear, and provides tools to build a governance framework that balances security with productivity.
Prerequisites: Complete the Start Here setup guides first. This guide builds heavily on approval statuses, rules, and the browser extension.
Day 1: What you can do right now
What to do | Where to do it | Why it matters |
Review the AI dashboard | Dashboards > AI > Apps | See which AI tools are in use, how many employees use them, and which AI apps have access to sensitive data through OAuth grants. |
Set approval statuses on AI apps | Apps (filter by AI category) | Decide which AI tools are approved, which aren't, and which need review. |
Mark your approved AI tools | Individual app records | If you've standardized on a specific AI tool (e.g., Gemini, Glean, Copilot), mark it as Approved. This is critical for redirect rules and browser nudges. |
Mark not-permitted AI tools | Individual app records | Common targets: ChatGPT/OpenAI (if you have an alternative), DeepSeek, and other AI tools with data handling concerns. |
Set up redirect rules | Automations > Rules | Create Account rules: when someone signs up for a not-permitted AI app, nudge them toward your approved alternative. (See Set up rules and alerts, Rule 2; a conceptual sketch of the rule logic follows this table.) |
Enable the browser nudge for not-permitted apps | Settings > Browser Extension | When someone visits the login or signup page of a not-permitted AI tool, the extension shows a prompt with your approved alternative. |
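The decision logic behind a redirect rule is simple, and it can help to see it spelled out. The sketch below is a conceptual illustration in Python, not Nudge Security's rule syntax; the app names and the approved alternative are placeholders for your own choices.

```python
from typing import Optional

# Conceptual illustration of redirect-rule logic - not Nudge Security's rule syntax.
# The app names and the approved alternative are placeholders for your own decisions.
APPROVED_AI_APP = "Gemini"
NOT_PERMITTED_AI_APPS = {"ChatGPT", "DeepSeek"}

def on_new_ai_account(user_email: str, app_name: str) -> Optional[str]:
    """Return a nudge message when someone signs up for a not-permitted AI app."""
    if app_name in NOT_PERMITTED_AI_APPS:
        return (
            f"Hi! We noticed you signed up for {app_name}. "
            f"{APPROVED_AI_APP} is our approved AI tool - please use it instead."
        )
    return None  # approved or needs-review apps don't trigger a redirect nudge

print(on_new_ai_account("pat@example.com", "DeepSeek"))
```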
What's next: Build on your foundation
Monitor AI tool adoption and trends
The AI dashboard (Dashboards > AI > Apps) is your central view. It shows:
AI app adoption trend – Charts showing AI tool adoption over time.
AI account users – Shows which users are the heaviest AI tool consumers.
AI acceptable use policy status – Shows the acceptance rate of your AI acceptable use policy across all AI users.
Organizational breakdowns – Breaks down AI usage by department, organizational unit, etc.
AI supply chain – Shows how many of your SaaS vendors use AI in their infrastructure.
Risky AI conversations – Highlights potentially risky interactions with AI chatbots.
Recent MCP server connections – Displays recent OAuth grants related to Model Context Protocol (MCP) server integrations.
Top data sources for AI conversations – Visualizes how data moves into AI tools in your environment through copy/paste and file upload actions.
AI chatbot activity – Details chatbot usage patterns and any sensitive data exposure.
AI apps with access to sensitive data – Identifies AI applications that have access to sensitive data scopes.
Check this dashboard regularly (weekly is a good cadence) to stay ahead of new AI adoption.
Enable AI conversation monitoring
With the browser extension deployed, you can monitor AI conversations. Go to Settings > Browser Extension and configure:
Usage tracking (default). See which AI tools people are actively using and how often. This gives you adoption data without looking at content.
Sensitive data detection. Enable this to flag when someone submits sensitive information—like credit card numbers, social security numbers, or proprietary data—into an AI prompt. Nudge Security detects the sensitive data and alerts you. (A conceptual sketch of this kind of detection appears below.)
Prompt retention. If you need to review the full content of flagged prompts (for compliance or incident response), you can enable prompt retention. This stores the full prompt text when sensitive data is detected.
Most organizations start with usage tracking, move to sensitive data detection once they're comfortable, and only enable prompt retention if compliance or policy requires it.
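For intuition, sensitive data detection comes down to pattern-matching prompt text before it reaches the AI tool. The following is a minimal sketch of that idea in Python; it is not Nudge Security's implementation, and the patterns are deliberately simplistic.

```python
import re

# Minimal illustration of pattern-based sensitive data detection.
# These patterns are deliberately simplistic; real detection is far more robust.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|api)[-_][A-Za-z0-9]{20,}\b"),
}

def flag_sensitive_data(prompt: str) -> list[str]:
    """Return the categories of sensitive data found in an AI prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

print(flag_sensitive_data("Customer SSN is 123-45-6789, card 4111 1111 1111 1111"))
# ['credit_card', 'us_ssn']
```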
Review AI conversations per user
For individual users, go to Identities > Users, select the user, and look at the AI Conversations section. This shows:
Which AI tools they've used.
When conversations took place.
Whether any copy/paste or file upload activity was detected.
Whether sensitive data was flagged (if sensitive data detection is enabled).
This is useful for targeted follow-up - if a user is consistently using a not-permitted AI tool, you can have a conversation with them or check whether they've received and responded to nudges.
See where corporate data is flowing into AI tools
The AI Apps Dashboard includes a data flow visualization that maps file uploads and copy/paste actions from your SaaS applications directly to AI conversation tools. This shows you not just who's using AI, but where the data they're putting into AI tools is coming from.
Go to Dashboards > AI > Apps to see the data flow chart. You can:
Spot high-volume data movements. The chart shows which source apps (Google Drive, Salesforce, your internal tools) are feeding the most data into which AI tools. A heavy flow from your CRM into ChatGPT is a different risk profile than someone pasting text from a note-taking app.
View data volume, file count, and user count. Understand whether the risk is concentrated (one person uploading a lot) or distributed (many people doing it casually).
Identify which AI tools are consuming the most corporate data. If one AI tool is receiving significantly more data than others, that's the tool where your governance effort - AUP enforcement, sensitive data detection, approval status decision - has the highest payoff.
This view is especially useful for scoping your AI governance priorities. Rather than treating all AI tools equally, you can focus your attention on the tools and data flows that represent the most real-world risk.
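If you want to reason about concentration versus distribution outside the chart, the same analysis is a simple aggregation. The sketch below assumes a hypothetical list of flow records (source app, AI tool, user, bytes moved); it is illustrative only, not a Nudge Security export format.

```python
from collections import defaultdict

# Hypothetical flow records: (source_app, ai_tool, user, bytes_moved).
# Not a Nudge Security export format - just an illustration of the analysis.
flows = [
    ("Salesforce", "ChatGPT", "alex@example.com", 48_000_000),
    ("Google Drive", "ChatGPT", "sam@example.com", 1_200_000),
    ("Notion", "Claude", "alex@example.com", 300_000),
    ("Salesforce", "ChatGPT", "alex@example.com", 22_000_000),
]

per_tool = defaultdict(lambda: {"bytes": 0, "users": set()})
for source, tool, user, size in flows:
    per_tool[tool]["bytes"] += size
    per_tool[tool]["users"].add(user)

for tool, stats in sorted(per_tool.items(), key=lambda kv: kv[1]["bytes"], reverse=True):
    # Lots of data from few users = concentrated risk; spread across many users = distributed.
    print(f"{tool}: {stats['bytes'] / 1e6:.1f} MB across {len(stats['users'])} user(s)")
```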
The browser extension is required to populate this chart - it tracks copy/paste and file upload activity between apps. If you haven't deployed the extension yet, see Deploy the browser extension.
Set up your AI acceptable use policy
Most organizations have employees review AI policies at hire and again during annual training - but by the time someone is sitting in front of an AI tool months later, they've forgotten what the policy said. Nudge Security's AUP playbook delivers your policy at the moment it matters: when an employee is actually accessing an AI tool.
See Set up your AI acceptable use policy for the full walkthrough - how to create your policy, configure delivery via the browser extension and email/Slack/Teams nudges, re-deliver after policy updates, and track acknowledgment across your workforce.
Build a sustainable AI governance framework
AI adoption moves fast, and a one-time cleanup won't keep up. The goal is a governance framework that scales as new AI tools appear:
Define your AI policy. Decide and document your organization's position: which AI tools are approved, what data can be shared with them, and what monitoring is in place. This doesn't need to be complicated - even a short policy gives employees clarity and gives you a reference point for enforcement.
Categorize AI tools by risk tier. Not all AI tools carry the same risk. A standalone image generator is different from a tool that connects to your email and files. Consider creating tiers: general-purpose AI (highest data risk), domain-specific AI (moderate risk), and AI features embedded in already-approved tools (usually lower incremental risk). Apply different governance policies to each tier (a sketch follows this list).
Monitor the AI supply chain and data flow. AI tools often rely on third-party models and data pipelines. Check the OAuth/Integrations section for AI apps to understand where data flows. An AI tool with OAuth grants to your Google Workspace is a supply chain risk worth understanding - see Map and reduce your SaaS attack surface for how to evaluate third-party data access.
Review and adjust regularly. AI is evolving fast. Set a monthly cadence to review the AI dashboard for new tools, check if your approved/not-permitted decisions still make sense, and update your governance rules as needed. What was a fringe AI tool last month may be widely adopted this month.
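To make the tiering from the list above concrete, here is a minimal sketch of how you might document risk tiers and the policy applied to each. The tool names and policy values are placeholders, not recommendations and not Nudge Security configuration.

```python
# Illustrative risk-tier mapping - tool names and policies are placeholders,
# not recommendations or Nudge Security configuration.
AI_RISK_TIERS = {
    "general_purpose": {            # highest data risk
        "examples": ["ChatGPT", "DeepSeek"],
        "policy": {"approval": "not permitted", "aup_required": True, "monitor_prompts": True},
    },
    "domain_specific": {            # moderate risk
        "examples": ["GitHub Copilot", "Glean"],
        "policy": {"approval": "needs review", "aup_required": True, "monitor_prompts": True},
    },
    "embedded_in_approved_apps": {  # usually lower incremental risk
        "examples": ["Zoom AI Companion", "Notion AI"],
        "policy": {"approval": "approved", "aup_required": True, "monitor_prompts": False},
    },
}

def policy_for(tool: str) -> dict:
    """Look up the governance policy for a tool based on its tier."""
    for tier in AI_RISK_TIERS.values():
        if tool in tier["examples"]:
            return tier["policy"]
    return {"approval": "needs review"}  # default for unclassified tools

print(policy_for("DeepSeek"))
```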
AI app discovery and risk assessment
When a new AI app appears in your environment, you need to quickly assess whether it's worth approving, restricting, or watching. Here's a structured approach:
Check the app record. Look at who introduced the app and whether they've responded to your automated nudge requesting clarification of use.
Review the Security profile. Does the vendor have SOC 2 or other relevant certifications? What does their data handling policy say about training on customer data?
Check OAuth grants. Has the AI tool been granted access to files, email, or other sensitive data? AI apps with broad OAuth grants are especially risky since they can ingest sensitive information programmatically, not just through user prompts.
Assess the user footprint. Is this one person experimenting, or has the tool already spread to multiple teams?
Make a decision. Approve, restrict, or redirect - and set the approval status so your rules enforce it going forward.
Risk signals for AI apps
AI apps have some risk factors that don't apply to other SaaS categories:
Data training policies. Does the vendor use customer data to train their models?
Prompt data retention. How long does the vendor retain prompt data, and who has access to it?
OAuth scope creep. AI tools increasingly request broad permissions (email, files, calendar) to power "context-aware" features. These grants may be disproportionate to the value the tool provides.
Embedded AI in existing tools. Some approved tools are adding AI features that change their data handling. An app you approved before it had AI capabilities may now be sending data to third-party model providers.
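One way to keep assessments consistent is to score each new AI app against these signals and the checks from the assessment steps, and let the score drive the approve/restrict/review decision. The sketch below is a hypothetical scoring rubric; the weights and thresholds are placeholders you would tune to your own risk tolerance.

```python
from dataclasses import dataclass

@dataclass
class AIAppRiskProfile:
    trains_on_customer_data: bool   # vendor uses prompts/customer data for model training
    retains_prompts: bool           # prompt data retained by the vendor
    broad_oauth_scopes: bool        # access to email, files, or calendar
    has_soc2: bool                  # relevant security certification
    user_count: int                 # how far the tool has already spread

def assess(app: AIAppRiskProfile) -> str:
    """Hypothetical scoring rubric - weights and thresholds are placeholders."""
    score = 0
    score += 3 if app.trains_on_customer_data else 0
    score += 2 if app.retains_prompts else 0
    score += 3 if app.broad_oauth_scopes else 0
    score -= 1 if app.has_soc2 else 0
    score += 1 if app.user_count > 10 else 0
    if score >= 5:
        return "restrict / redirect to approved alternative"
    if score >= 2:
        return "needs review"
    return "candidate for approval"

print(assess(AIAppRiskProfile(True, True, False, True, 3)))  # 'needs review'
```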
Key features for this use case
Feature | Where to find it | What it does |
AI dashboard | Dashboards > AI > Apps | AI app discovery, adoption tracking, daily active users, and new AI tool introductions. |
AI conversation monitoring | Settings > Browser Extension | Usage tracking, sensitive data detection, and optional prompt retention for AI tools. |
User AI conversations | Identities > Users > individual profiles > AI Conversations | Per-user AI tool usage, conversation logs, and sensitive data flags. |
App security profiles | Apps > individual app records > Security | Vendor certifications and data handling policies - important context for AI app risk assessment. |
OAuth grant details | Apps > individual records > OAuth/Integrations | What data AI tools can access programmatically - a key AI supply chain risk factor. |
Redirect rules | Automations > Rules | Nudge users away from not-permitted AI tools and toward your approved alternative. |
Browser nudges | Settings > Browser Extension | Real-time prompts on login/signup pages of not-permitted AI apps. |
Approval statuses | Apps or individual app records | The foundation for all AI governance - approved vs. not permitted. |


