Securing AI Coding Tools: Permission Controls and Credential Protection for Engineering Teams
AI coding tools handle security, data retention, and training data differently. This guide helps you quickly find tool-specific security configurations, compare approaches across Claude Code, GitHub Copilot, Cursor, Codex CLI, and Roo Code, and access official policy documentation.
What if your team could use AI coding assistants without worrying about exposing customer data or credentials? What if each developer could configure what their AI tool can access (files, commands, network requests)? What if you could trust that AI-generated code never included your API keys or database passwords?
Each tool handles security differently. Different permission systems, different data retention policies, different training data usage. Understanding these differences is the first step to protecting your team's code and credentials.
Without proper security configuration, your team might face serious problems:
- Credential exposure: AI tools read files containing API keys and database passwords, then potentially include them in generated code or send them to model providers
- Unrestricted file access: No controls on what gets indexed (customer data, production configs, HR documents)
- Inconsistent security practices: Claude Code offers file, command, and network permissions. GitHub Copilot uses organization-wide content exclusions. Cursor relies on Privacy Mode and `.cursorignore` files. Codex CLI provides workspace sandboxing. Roo Code uses `.rooignore` patterns. Without understanding these differences, teams end up with gaps in security coverage
The first step is understanding how each tool approaches security, what policies govern data retention and training data usage, and where to find official documentation. This post serves as a quick reference guide to help you navigate these differences and configure the right protections for your team.
This post is organized as both a guide and a reference. Start with the comparison table to identify which tool matches your needs. Then dive into tool-specific sections for detailed security configurations, data retention policies, and links to official documentation. I'll also cover credential protection patterns that apply across all tools, team security standards with AGENTS.md, and critical vulnerabilities to address immediately.
Why AI Tool Security Matters More Than You Think
AI tools need file access to be useful. They read your codebase to understand context, suggest relevant code, and generate implementations that match your patterns. That access creates a trust paradox: AI tools need access to help you, but unrestricted access creates exposure.
Here's the real-world scenario that makes this concrete:
A developer clones a new repository that includes a production config file with database credentials. Their AI tool immediately indexes the entire repo. The tool now "knows" those credentials. When the developer asks the AI to generate database connection code, it might suggest code that includes those actual credentials instead of environment variable placeholders.
Another scenario: Your team has test data files with customer PII (names, emails, phone numbers). An AI tool indexes these files. Later, when generating test fixtures, it suggests using actual customer data it saw in those files.
Permission-Based Security: Similar to mobile app permissions, AI coding tools can be configured with explicit allow/deny rules that control file access, command execution, and network requests.
The breakthrough insight: AI tools don't understand sensitive vs non-sensitive data. They see text. Your security configuration teaches them what's off-limits.
Team Benefit: Proactive security prevents credential leaks and data exposure before they happen. Configure once, protect continuously.
Security Features at a Glance
Before diving into each tool's specifics, here's how their security features compare. Use this table to identify which tool(s) match your team's needs, then read the relevant sections below for detailed configuration guidance.
| Feature | Claude Code | GitHub Copilot | Cursor | Codex CLI | Roo Code |
|---|---|---|---|---|---|
| Permission Granularity | High (file/command/network) | Medium (org-level policies) | Low (Privacy Mode only) | Medium (sandbox policies + approval modes) | Medium (file patterns + permission approval) |
| Credential Protection | Deny rules in settings.json | Organization content exclusions (web config) | .cursorignore + Privacy Mode | Network disabled by default, workspace isolation | .rooignore file + client-only architecture |
| Configuration Level | Per-developer | Organization-wide | Per-developer | Per-developer with enterprise options | Per-developer (workspace) |
| Data Retention | 30-day default (Enterprise configurable) | Prompts discarded immediately; no training on customer data | Privacy Mode: zero retention | Telemetry disabled by default | Immediate deletion after forwarding |
| Training Data Usage | Not used for training (Enterprise/API) | Not used for training (Business/Enterprise) | Privacy Mode: not used for training | Not used for training (can opt in) | Not used for training |
| Sandboxing | OS-level (filesystem + network) | None | None | Yes (OS-level: macOS Seatbelt, Linux Landlock/seccomp) | None (workspace-only isolation) |
| Best For | Teams needing granular control | Large orgs wanting centralized policies | Solo developers with Privacy Mode enabled | Teams needing terminal-based agents with strong isolation | Organizations requiring open-source auditability and self-hosting |
Claude Code provides the most granular control. You configure file access, command execution, and network requests individually. Best for teams with complex security requirements or strict compliance needs.
GitHub Copilot excels at organization-wide policy enforcement. Security teams configure policies centrally, and all developers inherit them automatically. Best for large organizations wanting consistent security without relying on individual configuration.
Cursor requires Privacy Mode to be enabled and Workspace Trust to be configured properly. Best for solo developers who carefully manage their security settings.
Codex CLI offers terminal-based autonomous agent capabilities with strong OS-level sandbox isolation. macOS Seatbelt and Linux Landlock/seccomp sandboxing prevent code from escaping the workspace. Best for teams wanting terminal-based AI agents to handle complex multi-step tasks without system-wide security risks.
Roo Code provides open-source transparency with self-hosting options. Teams can audit the codebase and deploy on-premises. Best for organizations requiring full visibility into tool behavior or regulated industries needing data to stay on-premises. Requires careful configuration of auto-approve features to minimize security risks.
The choice depends on your priorities: granular control (Claude Code), centralized management (GitHub Copilot), lightweight individual use (Cursor with proper configuration), terminal-based agents with isolation (Codex CLI), or open-source auditability (Roo Code).
The sections below provide detailed security configurations and policies for each tool. If you already know which tool your team uses, jump directly to that section. If you're evaluating tools, start with your highest-priority security feature from the table above.
Claude Code's Permission System—Granular Control
Claude Code provides the most granular permission system of the major AI coding tools. It uses a three-tier model in ~/.claude/settings.json with clear precedence rules.
The permission system works like this: deny rules block operations entirely, ask rules require confirmation, and allow rules permit automatically. When Claude Code encounters an operation, it checks permissions in this order: deny (highest precedence) > ask > allow.
Here's a basic security configuration:
{
"permissions": {
"deny": [
"Read(**/.env*)",
"Read(**/config/secrets/**)",
"Read(**/customer-data/**)",
"Bash(curl *)",
"Bash(wget *)"
],
"allow": [
"Bash(npm run test:*)",
"Bash(npm run build:*)",
"Read(src/**)",
"Read(tests/**)"
],
"ask": ["Bash(git push:*)", "Edit(/config/**)"]
}
}
Deny section: Critical files that should never be accessed. This blocks any .env files, secrets directories, and customer data. It also blocks network commands like curl and wget that could leak data externally.
Allow section: Safe operations that don't require confirmation. Running tests and builds is low-risk. Reading source code and tests is necessary for the AI to function effectively.
Ask section: Operations requiring manual approval. Git push changes the remote repository. Editing config files could break the application. Both deserve a confirmation prompt.
Deny Rules: Highest precedence permission that blocks operations entirely. Use deny rules to protect sensitive files like `.env`, credentials, and customer data from any access.
You can configure tool-specific permissions for even finer control:
{
"permissions": {
"deny": ["Read(**/.env*)", "WebFetch(*)"],
"allow": [
"mcp__context7__resolve-library-id",
"WebFetch(domain:docs.npmjs.com)"
]
}
}
This configuration blocks all web fetches except to specific approved domains like npm documentation. The MCP tool is allowed without restrictions.
Training Data Policy: Claude Code does not use your code for training AI models when using Enterprise or API plans. For Team and Pro plans, code is not used for training by default. Learn more about training data policies
Why this matters: Claude Code's permission system gives you file-level, command-level, and network-level control. Other AI tools typically offer only file-level exclusions.
For more information:
- Excluding sensitive files
- Settings documentation
- Custom data retention for Enterprise
- Zero data retention explained
Team Benefit: Every developer can use Claude Code with identical security boundaries. New team members inherit the team's security configuration on day one.
GitHub Copilot—Organization-Level Security
GitHub Copilot takes a different approach: organization-wide policies rather than individual developer permissions. Enterprise administrators control what entire teams can access.
The advantage is centralized security. The security team configures policies once, and every developer using GitHub Copilot automatically inherits those protections. No need to ensure each developer configures their own settings correctly.
Content exclusions are configured through GitHub's web interface by organization administrators. This feature is only available to organizations with a Copilot Business or Copilot Enterprise plan.
To configure content exclusions, navigate to your organization settings: Settings > Copilot > Content exclusion. Here you can specify patterns for files that Copilot should ignore:
# Credentials and secrets
.env
.env.*
.env.local
config/secrets/
*.key
*.pem
# Customer data
customer-data/
user-uploads/
/data/production/
# Internal documentation
/docs/internal/
HR-docs/
These patterns use the same syntax as .gitignore: wildcards, directory matches, and file extension filters. Unlike other AI tools that use local ignore files, GitHub Copilot requires organization-level configuration through the web interface.
Content Exclusions: Patterns that tell GitHub Copilot which files to ignore when indexing your codebase. Similar to .gitignore preventing files from being committed to version control.
Key differences from Claude Code:
- No granular command permissions: GitHub Copilot doesn't offer control over which bash commands or network requests it can make
- No per-developer configuration: Security is organization-wide, not individually customizable
- Data retention varies by plan: Business plan doesn't retain code prompts; Enterprise plan offers stronger guarantees with zero data retention
Training Data Policy: GitHub Copilot Business and Enterprise plans do not use customer code for training AI models. Code snippets and prompts are processed in real-time but never stored or used to improve the model. Learn more about data handling
The organization-level approach means security teams can enforce policies across hundreds of developers without relying on individual configuration.
For more information:
- Content exclusion concepts
- Excluding content from GitHub Copilot
- GitHub Copilot Business privacy overview
Team Benefit: Centralized security policies scale across entire organization. Security team configures once; every developer inherits the same protections.
Cursor—Privacy Mode and Vulnerability Warnings
Cursor offers Privacy Mode as its primary security feature. When enabled, Privacy Mode guarantees zero data retention. Your code never gets stored or used for training.
You can enable Privacy Mode through Cursor's settings interface (typically found under Settings > General > Privacy Mode, though the exact path may vary by version).
For file exclusions, Cursor uses a .cursorignore file in the repository root with .gitignore-style syntax:
.env*
config/secrets/
customer-data/
*.key
*.pem
Place this in your repository root. Cursor will exclude these patterns when indexing your codebase.
Training Data Policy: When Privacy Mode is enabled, your code is never stored or used for training. In Share Data mode, code may be temporarily stored for performance optimization but is not used to train AI models. Learn more about data usage
Cursor includes Workspace Trust settings that control whether untrusted folders can automatically execute code when you open them. Review these settings (typically under Settings > Security > Workspace Trust) to ensure they align with your security requirements.
It's also recommended to review security settings after Cursor updates to ensure your configuration remains intact.
Team Benefit: Security awareness prevents attacks from repositories that look legitimate but contain malicious startup scripts. Every team member learns to verify Workspace Trust before opening new repos.
Codex CLI—Terminal-Based Agent with Sandboxing Controls
OpenAI's Codex CLI is an open-source, terminal-based AI coding agent. Unlike code completion tools, Codex CLI performs complete tasks: writing features, running tests, and proposing pull requests. This autonomy requires robust security controls.
Codex CLI's primary security feature is sandboxed execution. Code runs in isolated environments on your local machine that prevent unauthorized access to your broader system.
Codex CLI uses OS-level sandboxing that varies by platform. On macOS and Linux, the sandboxing restricts filesystem and network access based on your selected security policy. Windows has more limited sandboxing capabilities, and WSL2 or Docker is recommended for stronger isolation on that platform.
Network access is disabled by default in the workspace-write sandbox policy. This prevents data exfiltration through external API calls or web requests. If you need network access, you must explicitly enable it in your configuration.
Sandboxing: Isolating code execution in a restricted environment that limits access to files, network, and system resources. Prevents malicious code from affecting your broader system.
Codex CLI provides two separate control systems:
Sandbox Policies (control what operations are allowed):
- read-only: Restricts to read-only operations
- workspace-write: Permits modifications within the workspace (default)
- danger-full-access: Removes filesystem restrictions entirely
Approval Modes (control when you're prompted):
- untrusted: Requires approval for all commands
- on-failure: Pauses only when execution encounters errors
- on-request: Prompts based on task complexity
- never: Runs commands without interruption
The --full-auto flag combines workspace-write sandbox with on-failure approvals for supervised local development.
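If you prefer to capture these defaults in configuration rather than pass them per invocation, Codex CLI reads a TOML config file. Here is a minimal sketch, assuming the currently documented file location and key names; verify both against your installed version:
# ~/.codex/config.toml — conservative baseline (key names assumed from current docs)
# Only allow writes inside the workspace; the rest of the filesystem stays read-only.
sandbox_mode = "workspace-write"

# Prompt before running commands Codex considers untrusted.
approval_policy = "untrusted"

[sandbox_workspace_write]
# Keep network access off (the default) unless a task genuinely needs it.
network_access = false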
OpenTelemetry support is opt-in and disabled by default. If enabled for monitoring, prompts are redacted by default (log_user_prompt = false) to prevent credential leaks in logs. Route telemetry only to collectors you control with appropriate retention limits and access controls.
Security considerations for Codex CLI:
Training data risks: Like all AI models, Codex CLI's underlying model training data may contain learned credentials from public repositories. Always review generated code for hardcoded secrets before running.
Prompt security: Secrets in your prompts become part of agent history and logs. Use environment variable references, never actual credential values, when asking Codex CLI to generate code.
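A contrived illustration of the difference (hypothetical URL, fake key reused from the example above):
Bad prompt:  "Write a client for https://api.example.com using the key sk-abc123xyz789"
Good prompt: "Write a client for https://api.example.com that reads the key from process.env.API_KEY"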
Training Data Policy: OpenAI does not use business customer code for training by default. Organizations can opt in to share data for model improvement if desired. Note that Codex's existing training data may contain credentials from public repositories, so always review generated code. Learn more about training opt-in
Why this matters: Codex CLI's autonomous nature means it can complete entire workflows without human intervention. That power requires stronger isolation than simple code completion tools. OS-level sandboxing ensures tasks stay contained within your workspace even if the agent generates malicious code.
For more information:
- Codex CLI Security Guide
- Sandbox Implementation Details
- Codex CLI Documentation
- Official GitHub Repository
Team Benefit: Codex CLI handles complex multi-step tasks (feature implementation, test generation, refactoring) from your terminal while OS-level sandboxing ensures code can't escape the workspace. Teams get AI productivity without system-wide security risks.
Roo Code—Open-Source with Self-Hosting Options
Roo Code is an open-source AI coding assistant that operates as a VS Code extension. The key differentiator: it's fully open-source and auditable, making it suitable for organizations that need to review tool behavior before deployment.
Roo Code's security approach focuses on file exclusions and permission-based approval for all operations.
File exclusions work through a .rooignore file in your workspace root. The syntax is identical to .gitignore, making it familiar for developers:
.env*
*.key
*.pem
config/secrets/
customer-data/
Place this in your VS Code workspace root. Roo Code will exclude these patterns when accessing files. Files ignored by .rooignore are marked with a 🔒 lock symbol in file listings.
.rooignore: File exclusion configuration for Roo Code using .gitignore syntax. Controls access to files across multiple Roo tools (read, write, diffs, content insertion).
Permission-based security means all file changes and commands go through approval gates. Nothing runs without user confirmation. You can configure auto-approval settings for trusted operations, though this requires careful consideration given the vulnerabilities discussed below.
Training Data Policy: Roo Code explicitly states "We do not train any models on your data." Your code is deleted immediately after forwarding to your chosen AI provider. Only your selected provider's training policies apply. Privacy policy
Architecture advantages:
Client-only: Code stays on your machine unless explicitly sent to an AI provider. No code leaves your system by default.
API key storage: Your API keys are stored locally on your device, never sent to Roo Code servers. They're only transmitted to the AI provider you've selected (OpenAI, Anthropic, etc.).
Data retention: Roo Code states "We do not store your code; it is deleted immediately after forwarding." Code transits servers only to reach upstream AI providers, then is discarded.
Self-hosting: Enterprise teams can deploy Roo Code on-premises or use internal models with private inference APIs. Full open-source codebase allows security audits before deployment.
Security best practices for Roo Code:
Be aware that symlinks in your projects may bypass .rooignore file exclusion rules. Review symlink usage and ensure they don't point to sensitive directories.
When using MCP configurations, validate them before enabling, especially in untrusted repositories. Project-specific MCP configuration files should be reviewed carefully.
Auto-Approve: Feature that allows Roo Code to execute certain operations without user confirmation. Convenient but risky if enabled for file writes or package installation in untrusted projects.
Use auto-approve features with caution. Be particularly careful with auto-approve for npm install, as malicious dependencies could run arbitrary code during installation through postinstall scripts.
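To see why this matters, consider a contrived package manifest (hypothetical package and script names):
{
  "name": "innocent-looking-helper",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node ./collect-env.js"
  }
}
If npm install is auto-approved, that postinstall script runs with your user's permissions the moment the dependency is installed, before anyone has read it.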
Why this matters: Roo Code's open-source nature allows security teams to audit the codebase and verify behavior before deployment. The self-hosting option is valuable for regulated industries requiring on-premises deployments.
Team Benefit: Open-source auditability allows security teams to review code before deployment. Self-hosting options keep sensitive code on-premises for regulated industries. Teams gain transparency and control at the cost of more active security management.
Protecting Credentials Across All Tools
Every AI tool can accidentally expose credentials. The problem is universal. The solution requires multiple layers of protection working together.
Pattern 1: Use Exclusion Files Everywhere
Configure file exclusions in every tool and system:
- Claude Code: Deny rules in `~/.claude/settings.json`
- GitHub Copilot: Content exclusions via web interface (organization settings)
- Cursor: `.cursorignore` in repository root
- Roo Code: `.rooignore` in workspace root
- Git: `.gitignore` (foundational layer that prevents credentials from being committed)
Each layer provides protection. .gitignore prevents commits. Tool-specific exclusions prevent indexing. GitHub Copilot uses organization-level web configuration instead of local files. Codex CLI uses workspace isolation rather than exclusion files.
Pattern 2: Standardize Credential File Patterns
Use consistent patterns across all your exclusion files:
.env*
*.key
*.pem
config/secrets/
credentials.json
auth-config.*
database-passwords.*
These patterns catch the most common credential storage locations. Adjust based on your team's specific patterns.
Pattern 3: Environment Variable Discipline
Never hardcode secrets in source files. Use environment variables exclusively.
Bad:
const apiKey = "sk-abc123xyz789"; // Never do this
Good:
const apiKey = process.env.API_KEY;
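If your runtime doesn't inject environment variables for you, a common local-development pattern is a loader such as dotenv (an assumption about your stack), paired with a fail-fast check:
// Development-only: load variables from a local .env file (dotenv assumed as the loader).
// The .env file itself stays in .gitignore and in every AI tool's exclusion list.
require("dotenv").config();

const apiKey = process.env.API_KEY;
if (!apiKey) {
  throw new Error("API_KEY is not set; see the README for required environment variables");
}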
Document required environment variables in your README, but never document the values themselves:
## Required Environment Variables
- `API_KEY`: Your API key from the provider dashboard
- `DATABASE_URL`: PostgreSQL connection string
- `JWT_SECRET`: Random string for token signing
Environment Variables: Configuration values stored outside your source code, typically in `.env` files that are never committed to version control. They keep secrets separate from code.
Establishing Team Security Standards with AGENTS.md
Here's the breakthrough: instead of fixing security issues after AI tools generate insecure code, teach your AI tools to generate secure code from the start.
That's what AGENTS.md does. It's a file that tells AI coding assistants how to generate code that meets your security standards.
Modern AI tools (Cursor, Claude Code, GitHub Copilot with workspace context) read AGENTS.md automatically from your project root. They use it to understand your team's security rules and generate code that follows them.
Create an AGENTS.md file in your repository root with a security section:
# Agent and LLM Rules
## Security Standards
### Files AI Tools Must Never Access
- .env, .env.*, .env.local (credentials)
- config/secrets/ (API keys, tokens)
- customer-data/ (PII)
- /data/production/ (production data)
- HR-docs/ (employee information)
### Files AI Tools Can Read Freely
- src/ (application code)
- tests/ (test files)
- docs/public/ (public documentation)
### Commands AI Tools Must Request Permission For
- npm install (dependency changes)
- git push (remote updates)
- database migrations
- docker commands
### Credential Handling Rules
- Never hardcode API keys or passwords
- Use environment variables for all secrets
- Reference process.env.VARIABLE_NAME in code
- Document required environment variables in README
### When Generating Code
- Check that generated code doesn't include hardcoded credentials
- Use placeholder values like "YOUR_API_KEY_HERE" in examples
- Include comments directing developers to environment variables
How this works in practice:
- Claude Code reads AGENTS.md and follows security rules when generating code
- Cursor reads AGENTS.md and applies patterns to its suggestions
- GitHub Copilot with workspace context uses AGENTS.md for understanding your standards
- Codex CLI reads AGENTS.md from the project root when generating code
- Roo Code reads AGENTS.md and uses it to guide code generation
When a developer asks their AI tool to generate database connection code, the tool reads AGENTS.md, sees the credential handling rules, and generates code using process.env.DATABASE_URL instead of a hardcoded connection string.
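For instance, a compliant suggestion for the database example might look like the following illustrative sketch; the pg client is an assumption, and the point is the process.env reference:
// Illustrative AGENTS.md-compliant output (assumes the `pg` library).
const { Pool } = require("pg");

const pool = new Pool({
  // Connection string comes from the environment, never from source code.
  connectionString: process.env.DATABASE_URL,
});

module.exports = pool;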
Team Benefit: Your AI tools become security-aware team members. They learn to generate code that follows credential best practices instead of suggesting hardcoded secrets.
Risk Scenarios and Mitigation
Understanding what can go wrong helps you prioritize which security controls to implement first.
Scenario 1: AI Tool Indexes Entire Codebase Including Customer PII
Risk: Customer data gets used for AI training or leaked in code suggestions.
Example: Your test data includes customer names, emails, and phone numbers. The AI tool indexes these files. Later, when generating test fixtures, it suggests using actual customer data it saw.
Mitigation: Configure content exclusions in all AI tools. Keep customer data in clearly-marked directories like customer-data/ or /data/production/. Add these paths to your exclusion files.
Detection: Audit AI tool suggestions periodically. If you see real customer names or data in generated code, your exclusions aren't working.
Scenario 2: AI Suggests Code That Includes Credentials from Another File
Risk: Credentials from .env files appear in generated code suggestions.
Example: Developer asks AI to generate API client code. The AI suggests apiKey: "sk-abc123xyz" using the actual key it read from .env instead of apiKey: process.env.API_KEY.
Mitigation: Block .env access via deny rules in Claude Code or exclusion files in other tools.
Detection: Manual code review and AI tool exclusion verification.
Scenario 3: Team Member Clones Malicious Repo, AI Tool Auto-Executes Code
Risk: Workspace Trust disabled (Cursor-specific) allows automatic code execution when opening folders.
Example: Developer clones what appears to be a legitimate open-source library. The repo contains a hidden startup script. With Workspace Trust disabled, Cursor executes the script automatically, giving the attacker code execution.
Mitigation: Enable Workspace Trust in Cursor settings immediately. Review security settings after every Cursor update. Verify Workspace Trust is enabled before opening unknown repositories.
Detection: Manual verification before opening new repos. Check Workspace Trust status in settings.
Scenario 4: Different Tools Have Different Data Retention Policies
Risk: Some AI tools retain code and may use it for training; others guarantee no training use. Team members using different tools create inconsistent data handling.
Example: One developer uses GitHub Copilot Business (no code retention). Another uses Cursor without Privacy Mode enabled (code retained for training). Same codebase, different privacy guarantees.
Mitigation: Understand each tool's retention policy. Enable Privacy Mode where available (Cursor). For GitHub Copilot, ensure your organization uses Business or Enterprise plan with appropriate retention policies.
Detection: Review tool privacy settings and organizational agreements. Document which tools meet your security requirements.
Team Benefit: Shared security playbook prevents common mistakes. When incidents occur, the team has documented response procedures.
This Week's Security Action Items
Here's what your team should do this week to implement AI tool security:
Monday: Audit Current AI Tool Access
- List which AI tools your team uses (Claude Code, GitHub Copilot, Cursor, Codex, Roo Code, others)
- Check what files each tool can currently access
- Review permission configurations (or identify the lack of configuration)
- Document current state so you can measure improvement
- Update to the latest version of each AI tool and regularly review known vulnerabilities for the specific tools your team uses
Tuesday: Configure Credential Exclusions
- Create or update `.gitignore` with credential patterns
- Configure content exclusions via organization settings for GitHub Copilot users (requires admin access)
- Add `.cursorignore` for Cursor users
- Add `.rooignore` for Roo Code users
- Configure deny rules in `~/.claude/settings.json` for Claude Code users
- Commit exclusion files to your repository so the whole team benefits
Wednesday: Enable Security Features
- Cursor users: Enable Workspace Trust (Settings > Security > Workspace Trust > Enabled)
- Cursor users: Enable Privacy Mode (Settings > General > Privacy Mode)
- Roo Code users: Review auto-approve settings, especially for file writes in untrusted repositories
- GitHub Copilot admins: Review organization security policies
- All users: Verify security settings are actually working by testing whether AI tools can read `.env` files
Thursday: Document Security Standards in AGENTS.md
- Create AGENTS.md in repository root
- Add security section showing files AI tools must never access
- Define credential handling rules (use environment variables, never hardcode)
- List commands requiring approval (npm install, git push, migrations)
- Share with team and commit to repository
Friday: Team Review
- Each developer verifies their AI tool configuration matches team standards
- Test that credentials are actually blocked (try having the AI tool read `.env` and verify it fails; see the canary-file sketch after this list)
- Discuss any gaps in security coverage
- Plan next week's improvements based on team needs
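A lightweight way to run that check is a throwaway canary file (hypothetical file name and value):
# Create a dummy credentials file that your exclusion patterns should cover.
echo "CANARY_SECRET=do-not-read-me-12345" > .env.canary

# Ask each AI tool to summarize .env.canary.
# Expected: the tool refuses or reports the file as excluded.
# If the canary value shows up in a response, that tool's exclusions are not working.

# Clean up when done.
rm .env.canary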
Team Benefit: Systematic rollout ensures everyone adopts security controls without disrupting productivity. Teams move from ad-hoc to standardized security in one week.
In Conclusion
Each tool has different security features. Claude Code offers granular file, command, and network permissions. GitHub Copilot provides organization-wide policies and content exclusions. Cursor requires Privacy Mode and Workspace Trust properly configured. Codex CLI provides autonomous agent capabilities with sandbox isolation. Roo Code offers open-source auditability with self-hosting options.
The principles remain consistent across tools:
- Block credential files with deny rules or exclusion patterns
- Use environment variables for all secrets
- Document security standards in AGENTS.md
- Teach AI tools to generate secure code, not just fix insecure code afterward
- Start with one control, iterate weekly
Pick one security control from this post. Just one. Maybe it's configuring content exclusions for GitHub Copilot. Maybe it's configuring deny rules in Claude Code settings. Maybe it's enabling Workspace Trust in Cursor. Maybe it's reviewing your Codex CLI sandboxing settings. Maybe it's creating a .rooignore file for Roo Code.
Configure it this week. Document it in AGENTS.md. Share it with your team.
Next week, add one more security control. Then another. Each iteration takes an hour.
Before you know it, your team will have a security system where:
- AI tools accelerate development without exposing credentials
- Every tool respects the same security boundaries
- New developers inherit secure configurations on day one
- Code reviews focus on features, not "did you check if AI leaked secrets?"
The key takeaway: Your team's first AI security project shouldn't be responding to a credential leak. It should be configuring the permission controls and exclusions that prevent leaks from happening. Start there. Everything else follows.
