├── resources
│   ├── images
│   │   └── AI Shared Responsibility Model.png
│   └── README.md
├── .gitignore
├── LICENSE
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── framework
│   ├── responsibility-matrix.md
│   ├── deployment-models.md
│   └── security-domains.md
└── README.md
/resources/images/AI Shared Responsibility Model.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mikeprivette/ai-security-shared-responsibility/HEAD/resources/images/AI%20Shared%20Responsibility%20Model.png
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # OS Generated Files
2 | .DS_Store
3 | .DS_Store?
4 | ._*
5 | .Spotlight-V100
6 | .Trashes
7 | ehthumbs.db
8 | Thumbs.db
9 |
10 | # IDE Files
11 | .vscode/
12 | .idea/
13 | *.swp
14 | *.swo
15 | *~
16 | .project
17 | .classpath
18 | .settings/
19 |
20 | # Documentation Build
21 | docs/_build/
22 | site/
23 |
24 | # Temporary Files
25 | *.tmp
26 | *.bak
27 | *.backup
28 | tmp/
29 | temp/
30 |
31 | # Logs
32 | *.log
33 | logs/
34 |
35 | # Environment Variables
36 | .env
37 | .env.local
38 | .env.*.local
39 |
40 | # Dependencies
41 | node_modules/
42 | vendor/
43 |
44 | # Python
45 | __pycache__/
46 | *.py[cod]
47 | *$py.class
48 | *.so
49 | .Python
50 | env/
51 | venv/
52 | ENV/
53 |
54 | # Testing
55 | coverage/
56 | .coverage
57 | htmlcov/
58 | .pytest_cache/
59 |
60 | # Archive Files
61 | *.zip
62 | *.tar.gz
63 | *.rar
64 |
65 | # Custom
66 | .private/
67 | drafts/
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2025 Mike Privette
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | # Code of Conduct
2 |
3 | ## Our Standards
4 |
5 | This is a professional framework for AI security. Keep discussions focused on improving the framework and helping others implement it effectively.
6 |
7 | ### Expected Behavior
8 |
9 | - Be professional and respectful
10 | - Focus on constructive feedback
11 | - Help others learn and implement
12 | - Share knowledge and experiences
13 | - Accept differing viewpoints
14 |
15 | ### Unacceptable Behavior
16 |
17 | - Harassment or discrimination
18 | - Personal attacks
19 | - Publishing private information
20 | - Trolling or inflammatory comments
21 | - Anything inappropriate for a professional setting
22 |
23 | ## Enforcement
24 |
25 | Violations may result in removal of comments, blocking, or banning from the repository.
26 |
27 | ## Reporting
28 |
29 | Report issues via GitHub's reporting mechanisms or open an issue for discussion.
30 |
31 | ## Scope
32 |
33 | This applies to all project spaces including issues, discussions, pull requests, and any other project-related communication.
34 |
35 | ---
36 |
37 | The goal is better AI security for everyone. Keep it professional and productive.
--------------------------------------------------------------------------------
/resources/README.md:
--------------------------------------------------------------------------------
1 | # Additional Resources
2 |
3 | ## External Frameworks and Standards
4 |
5 | ### Regulatory Frameworks
6 | - [EU AI Act](https://artificialintelligenceact.eu/) - European Union AI regulation
7 | - [NIST AI Risk Management Framework](https://www.nist.gov/itl/ai-risk-management-framework) - US government AI RMF
8 | - [ISO/IEC 23053](https://www.iso.org/standard/74438.html) - Framework for AI using ML
9 | - [Singapore Model AI Governance Framework](https://www.pdpc.gov.sg/Help-and-Resources/2020/01/Model-AI-Governance-Framework) - Singapore's governance approach
10 |
11 | ### Industry Resources
12 | - [MITRE ATLAS](https://atlas.mitre.org/) - Adversarial Threat Landscape for AI Systems
13 | - [OWASP Top 10 for LLM](https://owasp.org/www-project-top-10-for-large-language-model-applications/) - LLM security risks
14 | - [AI Incident Database](https://incidentdatabase.ai/) - Repository of AI incidents
15 | - [Partnership on AI](https://partnershiponai.org/) - Best practices for responsible AI
16 |
17 | ## Disclaimer
18 |
19 | External links and resources are provided for informational purposes only. Inclusion does not imply endorsement. Always evaluate resources based on your organization's specific needs and requirements.
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing
2 |
3 | Thanks for helping improve the AI Security Shared Responsibility Framework.
4 |
5 | ## Quick Start
6 |
7 | 1. **Found an issue?** Open an issue to discuss
8 | 2. **Have an idea?** Start with an issue or discussion
9 | 3. **Ready to contribute?** Fork, make changes, submit PR
10 |
11 | ## What We Need
12 |
13 | ### High Priority
14 | - **Real implementation experiences** - What worked? What didn't?
15 | - **Missing deployment models or domains** - What are we not covering?
16 | - **Practical templates** - Risk assessments, policies, checklists
17 |
18 | ### Always Welcome
19 | - Typo fixes
20 | - Clarifications
21 | - Better examples
22 | - Additional resources
23 |
24 | ## Making Changes
25 |
26 | ### For Small Changes (typos, clarity)
27 | 1. Fork the repo
28 | 2. Make your changes
29 | 3. Submit a PR with a clear description
30 |
31 | ### For Larger Changes
32 | 1. Open an issue first to discuss
33 | 2. Get feedback on the approach
34 | 3. Fork and implement
35 | 4. Submit PR referencing the issue
36 |
37 | ## Pull Request Guidelines
38 |
39 | Keep PRs focused:
40 | - One logical change per PR
41 | - Clear commit messages
42 | - Update relevant documentation
43 | - Link to any related issues
44 |
45 | ## Code of Conduct
46 |
47 | See [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md). TL;DR: Be professional.
48 |
49 | ## Questions?
50 |
51 | - Open an issue
52 | - Start a discussion
53 |
54 | ## Recognition
55 |
56 | Contributors are recognized in:
57 | - GitHub contributors list
58 | - Release notes
59 |
60 | ---
61 |
62 | The goal is simple: make AI security responsibilities clear for everyone. Your experience and feedback make that possible.
--------------------------------------------------------------------------------
/framework/responsibility-matrix.md:
--------------------------------------------------------------------------------
1 | # AI Security Shared Responsibility Matrix
2 |
3 | ## Overview
4 |
5 | This 8x16 matrix maps security responsibilities across all eight deployment models and all sixteen security domains.
6 |
7 | ## Quick Reference
8 |
9 | ### Responsibility Levels
10 |
11 | - **Provider**: Primary responsibility lies with the AI service provider
12 | - **Customer**: Primary responsibility lies with the customer organization
13 | - **Shared**: Responsibilities are distributed between the provider and the customer
14 | - **N/A**: Domain not typically relevant for this deployment model
15 |
16 | ## Complete Responsibility Matrix
17 |
18 | | Security Domain | SaaS AI | PaaS AI | IaaS AI | On-Premises | Embedded AI | Agentic AI | AI Coding | MCP Systems |
19 | |-----------------|---------|---------|---------|-------------|-------------|------------|-----------|-------------|
20 | | **Application Security** | Provider | Shared | Customer | Customer | Shared | Customer | Customer | Customer |
21 | | **AI Ethics and Safety** | Shared | Shared | Shared | Shared | Shared | Customer | Shared | Customer |
22 | | **Model Security** | Provider | Provider | Provider | Shared | Provider | Shared | Provider | Shared |
23 | | **User Access Control** | Customer | Customer | Customer | Customer | Customer | Customer | Customer | Customer |
24 | | **Data Privacy** | Shared | Shared | Shared | Customer | Shared | Customer | Customer | Customer |
25 | | **Data Security** | Provider | Shared | Shared | Customer | Provider | Customer | Provider | Customer |
26 | | **Monitoring and Logging** | Shared | Shared | Customer | Customer | Provider | Customer | Provider | Customer |
27 | | **Compliance and Governance** | Shared | Shared | Customer | Customer | Provider | Customer | Provider | Customer |
28 | | **Supply Chain Security** | Provider | Shared | Shared | Customer | Provider | Shared | Provider | Customer |
29 | | **Network Security** | Provider | Shared | Customer | Customer | Provider | Customer | Provider | Customer |
30 | | **Infrastructure Security** | Provider | Shared | Customer | Customer | Provider | Customer | Provider | Customer |
31 | | **Incident Response** | Shared | Shared | Customer | Customer | Shared | Customer | Shared | Customer |
32 | | **Agent Governance** ★ | Shared | Shared | Customer | Customer | N/A | Customer* | N/A | Customer |
33 | | **Code Generation Security** ★ | N/A | N/A | N/A | N/A | N/A | N/A | Customer* | N/A |
34 | | **Context Pollution Protection** ★ | Shared | Shared | Customer | Customer | Shared | Shared | Customer | Customer* |
35 | | **Multi-System Integration** ★ | Shared | Shared | Customer | Customer | Shared* | Shared | Customer | Customer* |
36 |
37 | *★ = AI-Specific Domain | \* = Critical Focus Area for this deployment model*
38 |
39 | ## Understanding Shared Responsibilities
40 |
41 | When marked as "Shared", responsibilities typically divide as follows:
42 |
43 | ### Provider Responsibilities
44 | - Platform-level security controls
45 | - Base infrastructure and features
46 | - Core compliance certifications
47 | - Model and service updates
48 |
49 | ### Customer Responsibilities
50 | - Configuration and implementation
51 | - Data governance and classification
52 | - Usage policies and procedures
53 | - Organizational compliance
54 |
55 | ## Key Patterns
56 |
57 | ### By Control Level
58 | - **Most Provider Control**: SaaS AI, Embedded AI
59 | - **Balanced Control**: PaaS AI, Agentic AI
60 | - **Most Customer Control**: IaaS AI, On-Premises, MCP Systems
61 |
62 | ### By Complexity
63 | - **Simplest**: SaaS AI (but still requires customer security efforts)
64 | - **Moderate**: PaaS AI, Embedded AI
65 | - **Complex**: IaaS AI, Agentic AI, AI Coding, MCP Systems
66 | - **Most Complex**: On-Premises (full stack ownership)
67 |
68 | ### Critical Focus Areas by Deployment Model
69 | - **Agentic AI**: Agent Governance requires special attention
70 | - **AI Coding**: Code Generation Security is essential
71 | - **MCP Systems**: Context Pollution and Multi-System Integration are critical
72 | - **Embedded AI**: Multi-System Integration needs focus
73 |
74 | ## Quick Decision Guide
75 |
76 | ### Choose SaaS AI when:
77 | - Quick deployment is priority
78 | - Limited security resources
79 | - Standard use cases
80 | - Acceptable to share infrastructure
81 |
82 | ### Choose PaaS AI when:
83 | - Need customization
84 | - Have security expertise
85 | - Require specific integrations
86 | - Want balanced control
87 |
88 | ### Choose IaaS AI when:
89 | - Need full control over models
90 | - Have strong security team
91 | - Require specific configurations
92 | - Custom deployment requirements
93 |
94 | ### Choose On-Premises when:
95 | - Maximum control required
96 | - Air-gapped environments
97 | - Regulatory requirements
98 | - Complete ownership needed
99 |
100 | ## Using This Matrix
101 |
102 | 1. **Identify** your deployment model(s)
103 | 2. **Review** responsibilities for each security domain
104 | 3. **Assess** your current coverage
105 | 4. **Plan** to address gaps in "Customer" domains
106 | 5. **Coordinate** on "Shared" responsibilities
107 |
108 | ## Important Notes
109 |
110 | - **No model is responsibility-free** - Even SaaS requires customer security efforts
111 | - **Shared means coordination** - Both parties must fulfill their obligations
112 | - **AI-specific domains (13-16)** - Represent emerging security challenges unique to AI systems
113 | - **Evolution is normal** - Organizations often use multiple models simultaneously
114 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 | # AI Security Shared Responsibility Model
6 |
7 | 
8 | [](https://opensource.org/licenses/MIT)
9 | [](https://github.com/mikeprivette/ai-security-shared-responsibility/releases)
10 |
11 | ### Clear security ownership for every AI deployment model
12 |
13 | **[Quick Start](#quick-start) • [Framework](#the-framework) • [Deployment Models](#8-deployment-models) • [Security Domains](#16-security-domains) • [About](#about)**
14 |
15 |
16 |
17 | ---
18 |
19 | ## The Problem
20 |
21 | AI is transforming industries at an unprecedented pace, but security ownership remains unclear. Organizations deploying AI systems—from simple ChatGPT usage to complex custom models—lack clarity on who's responsible for what.
22 |
23 | This gap creates risk. Without clear ownership boundaries, critical security tasks fall through the cracks. Data governance, model security, and compliance requirements become nobody's responsibility—until something goes wrong.
24 |
25 | The shared responsibility model solved this for cloud computing. Now AI needs the same clarity.
26 |
27 | ## What This Is
28 |
29 | A framework for understanding security responsibilities across AI deployments. Like cloud computing's shared responsibility model, this framework maps who owns what across **8 deployment models** and **16 security domains**.
30 |
31 | Whether you're using ChatGPT, building custom models, or deploying autonomous agents, this framework shows exactly what you're responsible for—and what your providers handle.
32 |
33 | ## Quick Start
34 |
35 |
36 |
37 | | **If you are a...** | **Start here** | **Focus on** |
38 | |:---:|:---:|:---|
39 | | **Security Leader** | [Responsibility Matrix](framework/responsibility-matrix.md) | Understanding your obligations across all AI initiatives |
40 | | **AI Practitioner** | [Deployment Models](framework/deployment-models.md) | Identifying which model fits your use case |
41 | | **Architect** | [Security Domains](framework/security-domains.md) | Comprehensive security coverage areas |
42 | | **Getting Started** | [This Section](#getting-started) | Step-by-step implementation guide |
43 |
44 |
45 |
46 | ## Why This Framework vs Others
47 |
48 | Think of this as your **Day 1 framework**—what you need before diving into technical specifications.
49 |
50 | | **Framework** | **Best For** | **When to Use** | **Limitation** |
51 | |:---|:---|:---|:---|
52 | | **🎯 This Framework** | Initial alignment & planning | Before deployment decisions | Less technical depth |
53 | | **NIST AI RMF** | Comprehensive risk management | Mature AI programs | Assumes AI maturity |
54 | | **CSA Models** | Cloud-specific implementations | Azure/AWS deployments | Too narrow for full AI landscape |
55 | | **Microsoft Approach** | Azure ecosystem | Technical implementation | Vendor-specific |
56 |
57 | Other frameworks assume you already know your deployment model and have organizational alignment. This framework helps you **build that alignment first**.
58 |
59 | ## The Framework
60 |
61 | ### Core Components
62 |
63 |
64 |
65 | | **Component** | **What It Covers** | **Key Insight** |
66 | |:---:|:---|:---|
67 | | **[8 Deployment Models](framework/deployment-models.md)** | From SaaS to on-premises, agents to assistants | Each model has distinct security boundaries |
68 | | **[16 Security Domains](framework/security-domains.md)** | Traditional + AI-specific (marked with ★) | New domains like agent governance are critical now |
69 | | **[Responsibility Matrix](framework/responsibility-matrix.md)** | Complete 8x16 mapping | Visual guide to all responsibilities |
70 |
71 |
72 |
73 | ### Key Principles
74 |
75 | - **No deployment is responsibility-free** - Even SaaS requires customer security efforts
76 | - **Control = Responsibility** - More control means more security obligations
77 | - **Shared requires coordination** - Both parties must fulfill their parts
78 | - **New domains matter now** - Agent governance isn't a future problem
79 |
80 | ## Getting Started
81 |
82 | 1. **📍 Identify** your AI deployment model(s) using the [deployment models guide](framework/deployment-models.md)
83 | 2. **✅ Check** the [responsibility matrix](framework/responsibility-matrix.md) for your obligations
84 | 3. **📋 Review** the [security domains](framework/security-domains.md) to understand coverage areas
85 | 4. **🎯 Plan** improvements based on identified gaps
86 |
87 | ## 8 Deployment Models
88 |
89 | Comprehensive coverage from simple SaaS to complex autonomous systems:
90 |
91 | ### Cloud-Based Models
92 | 1. **SaaS AI Models** - ChatGPT, Claude, Gemini (Public & Private)
93 | 2. **PaaS AI Models** - Azure OpenAI, AWS Bedrock, Google AI Platform
94 | 3. **IaaS AI Models** - Custom models on cloud infrastructure
95 |
96 | ### Self-Managed & Specialized
97 | 4. **On-Premises AI Models** - Local LLMs, air-gapped systems
98 | 5. **SaaS Products with Embedded AI** - Salesforce Einstein, MS Copilot
99 | 6. **Agentic AI Systems** - Autonomous multi-agent configurations
100 | 7. **AI Coding Assistants** - GitHub Copilot, Cursor, Claude Code
101 | 8. **MCP-Based Systems** - Persistent memory & context systems
102 |
103 | [→ Full deployment models guide](framework/deployment-models.md)
104 |
105 | ## 16 Security Domains
106 |
107 | Comprehensive coverage across traditional and emerging AI security areas:
108 |
109 | **Traditional Domains (1-12)**
110 | - Application Security
111 | - AI Ethics and Safety
112 | - Model Security
113 | - User Access Control
114 | - Data Privacy
115 | - Data Security
116 | - Monitoring and Logging
117 | - Compliance and Governance
118 | - Supply Chain Security
119 | - Network Security
120 | - Infrastructure Security
121 | - Incident Response
122 |
123 | **Emerging AI Domains (13-16)** ★
124 | - **Agent Governance** - Control of autonomous AI agents
125 | - **Code Generation Security** - AI-generated code protection
126 | - **Context Pollution Protection** - Preventing false information injection
127 | - **Multi-System Integration Security** - Cross-system AI orchestration
128 |
129 | [→ Full security domains guide](framework/security-domains.md)
130 |
131 | Securing an AI system is a multi-faceted challenge that requires attention to various domains and usage states. As the deployment models evolve, so too will these focus areas.
132 |
133 | ## Contributing
134 |
135 | This framework improves with real-world input. Looking for:
136 | - Implementation experiences
137 | - Framework improvements
138 | - Templates and tools
139 |
140 | See [CONTRIBUTING.md](CONTRIBUTING.md) for details or open an issue to start a discussion.
141 |
142 | ## Evolution
143 |
144 | - **August 2024**: [Original framework published](https://www.returnonsecurity.com/p/ai-security-shared-responsibility-model-navigating-risks-ai-deployment)
145 | - **September 2025**: Expanded to 8 models and 16 domains, open sourced
146 |
147 | The framework has grown from 4 to 8 deployment models and added 4 emerging security domains based on how AI security has evolved over the past year.
148 |
149 | ## About
150 |
151 | Created by [Mike Privette](https://www.linkedin.com/in/mikeprivette/), founder of [Return on Security](https://returnonsecurity.com).
152 |
153 | Questions? Open an issue to start a discussion.
154 |
155 | ## License
156 |
157 | MIT - See [LICENSE](LICENSE) file.
158 |
--------------------------------------------------------------------------------
/framework/deployment-models.md:
--------------------------------------------------------------------------------
1 | # AI Deployment Models
2 |
3 | ## Overview
4 |
5 | This document provides detailed descriptions of the eight AI deployment models covered in the AI Security Shared Responsibility Framework. Each model has unique characteristics, risk profiles, and responsibility distributions between providers and organizations.
6 |
7 | ## Table of Contents
8 |
9 | 1. [SaaS AI Models](#1-saas-ai-models)
10 | 2. [PaaS AI Models](#2-paas-ai-models)
11 | 3. [IaaS AI Models](#3-iaas-ai-models)
12 | 4. [On-Premises AI Models](#4-on-premises-ai-models)
13 | 5. [SaaS Products with Embedded AI](#5-saas-products-with-embedded-ai)
14 | 6. [Agentic AI Systems](#6-agentic-ai-systems)
15 | 7. [AI Coding Assistants](#7-ai-coding-assistants)
16 | 8. [MCP-Based Systems](#8-mcp-based-systems)
17 |
18 | ---
19 |
20 | ## 1. SaaS AI Models
21 |
22 | ### Description
23 | AI services consumed as software-as-a-service offerings, where the provider manages the infrastructure, model, and platform.
24 |
25 | ### Examples
26 |
27 | **Public SaaS:**
28 | - ChatGPT (OpenAI)
29 | - Claude (Anthropic)
30 | - Gemini (Google)
31 | - Perplexity
32 |
33 | **Private SaaS:**
34 | - Enterprise ChatGPT
35 | - Custom organizational AI deployments
36 | - Dedicated tenant instances
37 |
38 | ### Risk Profile
39 | - **Public SaaS**: High risk due to shared infrastructure and limited control
40 | - **Private SaaS**: Moderate risk with enhanced isolation and controls
41 |
42 | ### Key Characteristics
43 | - Minimal infrastructure management required
44 | - Quick deployment and scaling
45 | - Limited customization options
46 | - Shared responsibility for data security
47 |
48 | ### Security Considerations
49 | - Data residency and sovereignty
50 | - API security and rate limiting
51 | - Prompt injection protection
52 | - Output filtering and validation
53 | - User access management
54 | - Data classification requirements
55 |
56 | ---
57 |
58 | ## 2. PaaS AI Models
59 |
60 | ### Description
61 | Platform services that provide tools and infrastructure for deploying and managing AI models, offering a balance between control and convenience.
62 |
63 | ### Examples
64 | - Azure OpenAI Service
65 | - Google AI Platform
66 | - AWS Bedrock
67 | - IBM Watson
68 | - Databricks ML
69 |
70 | ### Risk Profile
71 | **Moderate** - Balanced control between provider and organization with customizable security configurations.
72 |
73 | ### Key Characteristics
74 | - Model hosting and management tools
75 | - Integration with cloud services
76 | - Customizable deployment options
77 | - Shared infrastructure with isolation
78 |
79 | ### Security Considerations
80 | - Secure model deployment pipelines
81 | - API gateway configuration
82 | - Network isolation and segmentation
83 | - Model versioning and rollback
84 |
85 | ---
86 |
87 | ## 3. IaaS AI Models
88 |
89 | ### Description
90 | AI models deployed on cloud infrastructure services where organizations have control over the operating system and applications.
91 |
92 | ### Examples
93 | - Custom models on AWS EC2
94 | - AI workloads on Google Compute Engine
95 | - Azure Virtual Machines running AI systems
96 | - GPU clusters for model training
97 | - Containerized AI deployments (Kubernetes)
98 |
99 | ### Risk Profile
100 | **Moderate to High** - Greater organizational control and responsibility with higher potential for misconfigurations.
101 |
102 | ### Key Characteristics
103 | - Full control over compute environment
104 | - Custom security configurations
105 | - Complex management requirements
106 | - Flexible scaling options
107 |
108 | ### Security Considerations
109 | - OS hardening and patching
110 | - Container security
111 | - Network security groups
112 | - Secure model storage
113 | - GPU security considerations
114 |
115 | ### Model Security Note
116 |
117 | Most organizations deploy pre-trained foundation models (LLaMA, Mistral, GPT variants) rather than building and training custom models from scratch. The model provider ships built-in protections against extraction attacks and other model-level vulnerabilities, while the customer handles deployment security, access controls, storage protection, and other related aspects.
118 |
119 | ---
120 |
121 | ## 4. On-Premises AI Models
122 |
123 | ### Description
124 | AI systems deployed entirely on internal hardware within the organization's data centers.
125 |
126 | ### Examples
127 | - Locally hosted LLMs (LLaMA, Mistral)
128 | - Edge AI deployments
129 | - Air-gapped AI systems
130 | - Private GPU clusters
131 | - Embedded AI in IoT devices
132 |
133 | ### Risk Profile
134 | **Variable** - Highest organizational responsibility with complete control over security measures.
135 |
136 | ### Key Characteristics
137 | - Complete organizational control
138 | - No external dependencies
139 | - Maximum customization possible
140 | - Full responsibility for all aspects
141 |
142 | ### Security Considerations
143 | - Physical security
144 | - Hardware supply chain
145 | - Complete security stack management
146 | - Disaster recovery planning
147 | - Isolated network requirements
148 |
149 | ### Model Security Note
150 |
151 | Most organizations deploy pre-trained foundation models (LLaMA, Mistral, GPT variants) rather than building and training custom models from scratch. The model provider ships built-in protections against extraction attacks and other model-level vulnerabilities, while the customer handles deployment security, access controls, storage protection, and other related aspects.
152 |
153 | ---
154 |
155 | ## 5. SaaS Products with Embedded AI
156 |
157 | ### Description
158 | Traditional business applications that have added AI capabilities as features you can interact with and control. The AI is exposed as a feature you actively use, not just background functionality.
159 |
160 | ### Examples
161 | - Salesforce Einstein (CRM where you can query AI for insights)
162 | - Microsoft 365 Copilot (You prompt it to generate content)
163 | - Slack AI (You ask it questions about your workspace)
164 | - ServiceNow AI (You interact with AI for automation decisions)
165 | - Adobe Firefly (You direct AI to create/modify designs)
166 | - Notion AI (You prompt it to write, summarize, or analyze)
167 |
168 | ### Risk Profile
169 | **Moderate to High** - AI capabilities embedded within business processes may access broader data sets than traditional features.
170 |
171 | ### Key Characteristics
172 | - AI seamlessly integrated into familiar tools
173 | - Users may not recognize AI-specific risks
174 | - Data exposure through AI features
175 | - Compliance complexity increases
176 |
177 | ### Security Considerations
178 | - AI feature access controls
179 | - Data classification for AI processing
180 | - Prompt data leakage prevention
181 | - Cross-application data access
182 | - Shadow AI within approved tools
183 |
184 | ---
185 |
186 | ## 6. Agentic AI Systems
187 |
188 | ### Description
189 | Autonomous AI systems capable of independent decision-making and action, often working in multi-agent configurations.
190 |
191 | *Note: The definition of "agent" varies across the industry, but I think Daniel Miessler has a good definition set in [RAID (Real World AI Definitions)](https://danielmiessler.com/blog/raid-ai-definitions).*
192 |
193 | ### Examples
194 | - Multi-agent customer service systems
195 | - Autonomous trading algorithms
196 | - Supply chain optimization agents
197 | - Cybersecurity response automation
198 | - Robotic process automation with AI
199 | - Autonomous research assistants
200 |
201 | ### Risk Profile
202 | **High** - Autonomous capabilities create cascading risk potential and complex accountability challenges.
203 |
204 | ### Key Characteristics
205 | - Independent decision-making
206 | - Multi-agent coordination
207 | - Complex action chains
208 | - Escalation requirements
209 | - Audit trail complexity
210 |
211 | ### Security Considerations
212 | - Agent authority limits
213 | - Decision audit trails
214 | - Failsafe mechanisms
215 | - Human-in-the-loop controls
216 | - Multi-agent coordination security
217 | - Preventing agent manipulation
218 |
219 | ---
220 |
221 | ## 7. AI Coding Assistants
222 |
223 | ### Description
224 | AI systems that assist with software development tasks, from code generation to debugging and documentation.
225 |
226 | ### Examples
227 | - GitHub Copilot
228 | - Cursor
229 | - Claude Code
230 | - Amazon CodeWhisperer
231 | - Tabnine
232 | - Replit Ghostwriter
233 | - JetBrains AI Assistant
234 |
235 | ### Risk Profile
236 | **Moderate to High** - Code generation may introduce vulnerabilities and intellectual property concerns.
237 |
238 | ### Key Characteristics
239 | - Real-time code suggestions
240 | - Context-aware completions
241 | - Cross-file understanding
242 | - Integration with IDEs
243 | - Learning from codebase patterns
244 |
245 | ### Security Considerations
246 | - Secure coding practices enforcement
247 | - Intellectual property protection
248 | - License compliance verification
249 | - Vulnerability introduction prevention
250 | - Secrets and credential protection
251 | - Supply chain security
252 | - Code quality standards
253 |
254 | ---
255 |
256 | ## 8. MCP-Based Systems
257 | > ["MCPs are other people's prompts pointing us to other people's code."](https://danielmiessler.com/blog/mcps-are-just-other-peoples-prompts-and-apis) - Daniel Miessler
258 |
259 | ### Description
260 | Systems built on Model Context Protocol with persistent memory and context, enabling long-term relationship management.
261 |
262 | ### Examples
263 | - Enterprise knowledge management systems
264 | - Customer relationship management with AI memory
265 | - Investment analysis platforms
266 | - Research assistant systems
267 | - Personal AI assistants with memory
268 |
269 | ### Risk Profile
270 | **High** - Long-term data persistence and complex integrations create ongoing exposure risks.
271 |
272 | ### Key Characteristics
273 | - Persistent memory across sessions
274 | - Context accumulation over time
275 | - Multiple data source integration
276 | - Persona-based interactions
277 | - Relationship tracking
278 |
279 | ### Security Considerations
280 | - Memory integrity protection
281 | - Context pollution prevention
282 | - Cross-context isolation
283 | - Long-term data governance
284 | - Relationship data security
285 | - Memory manipulation detection
286 | - Persona separation enforcement
287 |
288 | ---
289 |
290 | ## AI-Enabled Products (Not Covered)
291 |
292 | This framework does not cover products that use AI internally to power their functionality but don't expose AI features for users to directly interact with. In these products, AI operates entirely in the background - you cannot prompt it, query it, or control its behavior.
293 |
294 | These products should still be evaluated from a security and legal standpoint regarding how they use your data for training their models. However, this is primarily a vendor management and compliance concern rather than a shared responsibility security model issue, since you have no control over the AI functionality.
295 |
296 | **Examples of AI-Enabled (background AI) products:**
297 | - **Otter.ai** - AI transcribes meetings automatically, but you can't interact with the AI
298 | - **Superhuman** - AI powers email features and writing, but you don't prompt or control it directly
299 | - **Grammarly** (basic) - AI checks grammar automatically, no user AI interaction
300 | - **Spotify** - AI creates playlists, but you don't directly engage with it
301 | - **LinkedIn** - AI suggests connections, but operates in the background
302 |
303 | For these products, focus on:
304 | - Standard vendor risk assessments
305 | - Data processing agreements
306 | - Terms of service review
307 | - Privacy policy evaluation
308 | - Compliance certification verification
309 |
310 | ---
311 |
312 | ## Deployment Model Selection Guide
313 |
314 | ### Decision Factors
315 |
316 | 1. **Data Sensitivity**
317 | - High: On-premises or Private SaaS
318 | - Medium: PaaS or IaaS
319 | - Low: Public SaaS
320 |
321 | 2. **Control Requirements**
322 | - Maximum: On-premises
323 | - High: IaaS
324 | - Moderate: PaaS
325 | - Low: SaaS
326 |
327 | 3. **Resource Availability**
328 | - Limited: SaaS or Embedded AI
329 | - Moderate: PaaS
330 | - Extensive: IaaS or On-premises
331 |
332 | 4. **Use Case Complexity**
333 | - Simple: SaaS or Embedded AI
334 | - Moderate: PaaS or Coding Assistants
335 | - Complex: IaaS, Agentic, or MCP Systems
336 |
337 | 5. **Compliance Requirements**
338 | - Strict: On-premises or Private deployments
339 | - Moderate: PaaS with compliance features
340 | - Basic: Public SaaS with agreements
341 |
342 | ### Hybrid Deployments
343 |
344 | Many organizations use multiple deployment models simultaneously:
345 |
346 | - **Development vs. Production**: SaaS for development, on-premises for production
347 | - **Tiered Approach**: Public SaaS for general use, private deployment for sensitive data
348 | - **Gradual Migration**: Starting with SaaS, moving to PaaS/IaaS as needs mature
349 | - **Specialized Systems**: Different models for different use cases
350 |
351 | ## Responsibility Overview
352 |
353 | For security responsibilities across all deployment models and domains, see the [AI Security Shared Responsibility Matrix](responsibility-matrix.md).
354 |
--------------------------------------------------------------------------------
/framework/security-domains.md:
--------------------------------------------------------------------------------
1 | # Security Domains in AI Systems
2 |
3 | ## Overview
4 |
5 | This document details the 16 security domains that comprise the AI Security Shared Responsibility Framework. These domains represent the comprehensive security considerations required for safe AI deployment.
6 |
7 | Domains 1-12 are traditional security areas adapted for AI systems, while domains 13-16 (marked with ★) are emerging security challenges unique to modern AI deployments.
8 |
9 | ---
10 |
11 | ## Security Domains
12 |
13 | ### 1. Application Security
14 |
15 | **Focus**: Security of AI applications and their interfaces
16 |
17 | **What This Actually Means**: This is about securing the actual application that uses AI - the websites, APIs, and interfaces that users interact with. It covers everything from input validation to protecting against attacks targeting the application layer.
18 |
19 | **Examples by Deployment Model**:
20 | - **SaaS AI**: OpenAI secures ChatGPT's web interface and API endpoints completely
21 | - **PaaS AI**: You configure API rate limits and input validation on Azure OpenAI Service
22 | - **IaaS AI**: You build and secure the entire application layer on AWS EC2
23 | - **On-Premises**: Complete application security from authentication to encryption
24 | - **Embedded AI**: Salesforce secures Einstein, you configure feature access and usage
25 | - **Agentic AI**: You secure agent interfaces and define action boundaries
26 | - **AI Coding**: You secure your development environment and code repositories
27 | - **MCP Systems**: You protect context system interfaces and access points
28 |
29 | **Responsibility Varies**:
30 | - **Provider (SaaS)**: Platform owns the application completely
31 | - **Shared (PaaS, Embedded)**: Provider supplies platform security, customer configures usage
32 | - **Customer (IaaS, On-Premises, Agentic, AI Coding, MCP)**: You own all application security
33 |
34 | **Key Considerations**:
35 | The main challenge shifts from basic security (in SaaS) to comprehensive application protection (in self-managed systems). Critical areas include prompt injection prevention, output validation, API security, and session management. For AI-specific applications, focus on context window security and token limit enforcement.
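
Below is a minimal sketch of what input and output validation at the application layer could look like. The deny-list patterns, token budget, and function names are illustrative assumptions rather than part of the framework; real deployments need far more robust, model-aware filtering.

```python
import re

# Illustrative deny-list of prompt-injection markers (assumed, not exhaustive).
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (your|the) system prompt",
]

MAX_INPUT_TOKENS = 4000  # hypothetical context-window budget


def rough_token_count(text: str) -> int:
    """Crude whitespace-based estimate; swap in the model's real tokenizer."""
    return len(text.split())


def validate_prompt(user_input: str) -> str:
    """Reject oversized or obviously suspicious prompts before they reach the model."""
    if rough_token_count(user_input) > MAX_INPUT_TOKENS:
        raise ValueError("Prompt exceeds token budget")
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, user_input, re.IGNORECASE):
            raise ValueError("Prompt matches a known injection pattern")
    return user_input


def validate_output(model_output: str) -> str:
    """Post-process output, e.g. redact anything that looks like a credential."""
    return re.sub(r"(?i)(api[_-]?key|password)\s*[:=]\s*\S+",
                  r"\1=[REDACTED]", model_output)
```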
36 |
37 | ---
38 |
39 | ### 2. AI Ethics and Safety
40 |
41 | **Focus**: Responsible AI deployment and bias prevention
42 |
43 | **What This Actually Means**: This covers both the ethical design of AI systems and their safe usage. It's about ensuring AI doesn't cause harm, perpetuate bias, or get used for inappropriate purposes - a responsibility that can't be fully outsourced.
44 |
45 | **Examples by Deployment Model**:
46 | - **SaaS AI**: OpenAI prevents harmful outputs, but you choose appropriate use cases
47 | - **PaaS AI**: Azure provides safety controls, you implement ethical guidelines
48 | - **IaaS AI**: You configure all safety measures and ethical boundaries on your models
49 | - **On-Premises**: Complete ownership of ethical AI implementation and safety
50 | - **Embedded AI**: Salesforce ensures Einstein's base safety, you control feature usage
51 | - **Agentic AI**: You define all ethical boundaries for autonomous agent actions
52 | - **AI Coding**: GitHub ensures Copilot safety, you review generated code for appropriateness
53 | - **MCP Systems**: You own ethical use of persistent memory and long-term context
54 |
55 | **Responsibility Varies**:
56 | - **Shared (SaaS, PaaS, IaaS, On-Premises, Embedded, AI Coding)**: Providers ensure base model safety, customers ensure appropriate usage
57 | - **Customer (Agentic, MCP)**: Critical autonomous and persistent systems require full customer ownership
58 |
59 | **Key Considerations**:
60 | Ethics and safety are inherently shared because providers ensure base model safety and training ethics, while customers are responsible for appropriate use case selection, deployment context, and ongoing ethical usage. Even in provider-managed models, customers choose how to apply the AI. Focus areas include harmful content prevention, bias detection, explainability, and human oversight mechanisms.
61 |
62 | ---
63 |
64 | ### 3. Model Security
65 |
66 | **Focus**: Protection of AI models from attacks and theft
67 |
68 | **What This Actually Means**: This is about protecting the AI model itself - preventing extraction, poisoning, or manipulation. It includes securing model weights, preventing adversarial attacks, and protecting intellectual property.
69 |
70 | **Examples by Deployment Model**:
71 | - **SaaS AI**: OpenAI protects GPT models from extraction and attacks
72 | - **PaaS AI**: Azure protects base models, you secure any fine-tuning
73 | - **IaaS AI**: Provider-supplied models (LLaMA, Mistral) come with built-in protections
74 | - **On-Premises**: Provider models have base security, you handle deployment protection
75 | - **Embedded AI**: Salesforce protects Einstein's models completely
76 | - **Agentic AI**: Shared between base model security and agent-specific protections
77 | - **AI Coding**: GitHub protects Copilot's code generation models
78 | - **MCP Systems**: Complex memory model protection shared with provider
79 |
80 | **Responsibility Varies**:
81 | - **Provider (SaaS, PaaS, IaaS, Embedded, AI Coding)**: Provider protects base models
82 | - **Shared (On-Premises, Agentic, MCP)**: Provider supplies secure models, customer handles deployment
83 |
84 | **Key Considerations**:
85 | Most organizations deploy pre-trained foundation models (LLaMA, Mistral, GPT variants) rather than building custom models from scratch. The model provider ships built-in protections against extraction attacks and other model-level vulnerabilities, while the customer handles deployment security, access controls, and storage protection.
86 |
87 | ---
88 |
89 | ### 4. User Access Control
90 |
91 | **Focus**: Authentication and authorization for AI systems
92 |
93 | **What This Actually Means**: This is about controlling who in your organization can access AI systems and at what permission level. It's fundamentally about organizational decisions - who should have access to what.
94 |
95 | **Examples by Deployment Model**:
96 | - **SaaS AI**: You decide who gets ChatGPT Enterprise licenses and permissions
97 | - **PaaS AI**: You configure who can access Azure OpenAI in your organization
98 | - **IaaS AI**: You implement complete IAM for your AI infrastructure
99 | - **On-Premises**: Full control over all access management systems
100 | - **Embedded AI**: You control who accesses AI features within applications
101 | - **Agentic AI**: Critical - you define who can control and monitor agents
102 | - **AI Coding**: You manage developer access to AI coding assistants
103 | - **MCP Systems**: You control access to persistent memory and context systems
104 |
105 | **Responsibility Varies**:
106 | - **Customer (All Models)**: Access control is always a customer responsibility
107 |
108 | **Key Considerations**:
109 | While providers may supply authentication mechanisms and IAM tools, the customer always owns the fundamental decisions about WHO in their organization gets access and at WHAT permission levels. This remains a customer responsibility across all deployment models. Key areas include API key management, prompt history protection, and model access permissions.
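
As a sketch, those WHO/WHAT decisions can be expressed as an explicit role-to-permission map that your organization owns regardless of which provider supplies the authentication layer. The roles and action names below are hypothetical.

```python
from enum import Enum


class Role(Enum):
    VIEWER = "viewer"        # may query the model
    DEVELOPER = "developer"  # may also manage prompts and fine-tunes
    ADMIN = "admin"          # may also rotate API keys and grant access


# Hypothetical permission map: which roles may perform which AI actions.
PERMISSIONS = {
    "query_model": {Role.VIEWER, Role.DEVELOPER, Role.ADMIN},
    "manage_finetunes": {Role.DEVELOPER, Role.ADMIN},
    "rotate_api_keys": {Role.ADMIN},
}


def authorize(role: Role, action: str) -> None:
    """Raise if the role is not allowed to perform the action."""
    if role not in PERMISSIONS.get(action, set()):
        raise PermissionError(f"{role.value} may not {action}")


authorize(Role.VIEWER, "query_model")        # passes silently
# authorize(Role.VIEWER, "rotate_api_keys")  # would raise PermissionError
```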
110 |
111 | ---
112 |
113 | ### 5. Data Privacy
114 |
115 | **Focus**: Protection of personal and sensitive information
116 |
117 | **What This Actually Means**: This covers how personal and sensitive data is handled when processed by AI systems. It includes consent, data minimization, and compliance with privacy regulations like GDPR.
118 |
119 | **Examples by Deployment Model**:
120 | - **SaaS AI**: ChatGPT processes data per OpenAI policies, you ensure consent
121 | - **PaaS AI**: Azure provides privacy controls, you implement data governance
122 | - **IaaS AI**: You implement all privacy controls for data processing
123 | - **On-Premises**: Complete control over data privacy implementation
124 | - **Embedded AI**: Salesforce handles Einstein privacy, you manage customer data consent
125 | - **Agentic AI**: You control all agent data handling and privacy
126 | - **AI Coding**: You manage code repository and development data privacy
127 | - **MCP Systems**: Critical - long-term memory requires careful privacy management
128 |
129 | **Responsibility Varies**:
130 | - **Shared (SaaS, PaaS, IaaS, Embedded)**: Provider handles platform privacy, customer manages data classification and consent
131 | - **Customer (On-Premises, Agentic, AI Coding, MCP)**: Full privacy control and responsibility
132 |
133 | **Key Considerations**:
134 | Privacy responsibilities depend on who processes the data and for what purpose. Cloud services create shared obligations under regulations like GDPR, where both processor and controller have duties. Key areas include data anonymization, consent management, and right to erasure implementation.
135 |
136 | ---
137 |
138 | ### 6. Data Security
139 |
140 | **Focus**: Confidentiality, integrity, and availability of data
141 |
142 | **What This Actually Means**: This is about protecting data at rest and in transit - encryption, access controls, and preventing data loss. It covers both the data fed into AI systems and the outputs they generate.
143 |
144 | **Examples by Deployment Model**:
145 | - **SaaS AI**: OpenAI handles all encryption and storage security
146 | - **PaaS AI**: Azure provides encryption tools, you configure data handling
147 | - **IaaS AI**: Cloud provider supplies infrastructure encryption, you implement the rest
148 | - **On-Premises**: You own all aspects of data security
149 | - **Embedded AI**: Application provider handles data security completely
150 | - **Agentic AI**: You secure all agent-processed data
151 | - **AI Coding**: Provider secures the service, you secure your code
152 | - **MCP Systems**: You protect all persistent context data
153 |
154 | **Responsibility Varies**:
155 | - **Provider (SaaS, Embedded, AI Coding)**: Full platform data security
156 | - **Shared (PaaS, IaaS)**: Provider supplies tools, customer configures
157 | - **Customer (On-Premises, Agentic, MCP)**: Complete data security ownership
158 |
159 | **Key Considerations**:
160 | Data security in AI includes unique challenges like vector database protection, prompt/response storage security, and training data protection. The level of control determines responsibility - managed services handle encryption automatically while self-managed systems require comprehensive data protection strategies.
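
For self-managed deployments, one common pattern is encrypting prompt/response records before they are stored. The sketch below uses the `cryptography` package's Fernet; the record shape is an assumption, and in practice the key would come from a secrets manager or KMS, never from source code.

```python
import json
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # placeholder: load from a secrets manager instead
fernet = Fernet(key)


def store_interaction(prompt: str, response: str) -> bytes:
    """Encrypt a prompt/response pair before it is written to storage."""
    record = json.dumps({"prompt": prompt, "response": response}).encode()
    return fernet.encrypt(record)


def load_interaction(ciphertext: bytes) -> dict:
    """Decrypt a stored record; raises InvalidToken if it was tampered with."""
    return json.loads(fernet.decrypt(ciphertext))


blob = store_interaction("What is our Q3 revenue?", "(model output)")
print(load_interaction(blob)["prompt"])
```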
161 |
162 | ---
163 |
164 | ### 7. Monitoring and Logging
165 |
166 | **Focus**: Detection and response to security events
167 |
168 | **What This Actually Means**: This is about tracking what's happening in your AI systems - who's using them, how they're being used, and detecting when something goes wrong. It's essential for both security and compliance.
169 |
170 | **Examples by Deployment Model**:
171 | - **SaaS AI**: OpenAI monitors ChatGPT, provides usage reports
172 | - **PaaS AI**: Azure provides monitoring tools, you analyze AI usage patterns
173 | - **IaaS AI**: You implement complete monitoring for your AI infrastructure
174 | - **On-Premises**: Full monitoring stack ownership and management
175 | - **Embedded AI**: Application provider monitors, shares relevant logs
176 | - **Agentic AI**: Critical - you must monitor all agent actions and decisions
177 | - **AI Coding**: Provider monitors service health, you track usage
178 | - **MCP Systems**: You monitor context evolution and memory changes
179 |
180 | **Responsibility Varies**:
181 | - **Provider (Embedded, AI Coding)**: Provider handles monitoring
182 | - **Shared (SaaS, PaaS)**: Provider generates logs, customer must analyze for their use cases
183 | - **Customer (IaaS, On-Premises, Agentic, MCP)**: Complete monitoring ownership
184 |
185 | **Key Considerations**:
186 | AI-specific monitoring includes tracking model drift, unusual query patterns, token usage anomalies, and prompt injection attempts. In shared scenarios, providers can log activity but can't know what's normal for your specific use case - that analysis is always your responsibility.
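
A minimal sketch of that customer-side analysis: flagging token-usage spikes against a rolling baseline. The window size and sigma threshold are assumed values you would tune for your workload.

```python
from collections import deque
from statistics import mean, stdev

# Hypothetical rolling window of per-request token counts for one user.
window = deque(maxlen=100)


def record_and_check(tokens_used: int, threshold_sigmas: float = 3.0) -> bool:
    """Return True if this request's token usage is anomalous vs. recent history."""
    anomalous = False
    if len(window) >= 10:  # need some history before judging
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and (tokens_used - mu) / sigma > threshold_sigmas:
            anomalous = True
    window.append(tokens_used)
    return anomalous


for usage in [500, 480, 510, 495, 505, 490, 500, 515, 485, 500, 9000]:
    if record_and_check(usage):
        print(f"ALERT: unusual token usage ({usage})")
```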
187 |
188 | ---
189 |
190 | ### 8. Compliance and Governance
191 |
192 | **Focus**: Adherence to regulations and organizational policies
193 |
194 | **What This Actually Means**: This covers meeting regulatory requirements (like GDPR, HIPAA, or the EU AI Act) and internal governance policies. It includes documentation, audit trails, and demonstrating compliance.
195 |
196 | **Examples by Deployment Model**:
197 | - **SaaS AI**: OpenAI provides compliance certifications for the platform
198 | - **PaaS AI**: Azure offers compliance tools, you implement organizational policies
199 | - **IaaS AI**: You ensure all compliance for your AI deployment
200 | - **On-Premises**: Complete compliance ownership and implementation
201 | - **Embedded AI**: Application vendor handles their compliance scope
202 | - **Agentic AI**: You ensure agent operations meet all compliance requirements
203 | - **AI Coding**: Provider maintains service compliance, you handle code compliance
204 | - **MCP Systems**: You manage long-term data governance and retention compliance
205 |
206 | **Responsibility Varies**:
207 | - **Provider (Embedded, AI Coding)**: Platform compliance certifications
208 | - **Shared (SaaS, PaaS)**: Provider certifies platform, customer ensures organizational compliance
209 | - **Customer (IaaS, On-Premises, Agentic, MCP)**: Full compliance responsibility
210 |
211 | **Key Considerations**:
212 | Compliance in AI includes emerging regulations like the EU AI Act, alongside traditional requirements. Providers can only certify what they control - organizational use cases, data handling, and specific industry requirements are always customer responsibilities.
213 |
214 | ---
215 |
216 | ### 9. Supply Chain Security
217 |
218 | **Focus**: Security of AI development and deployment pipeline
219 |
220 | **What This Actually Means**: This covers the security of all components that make up your AI system - from models and training data to libraries and dependencies. It's about knowing and trusting what goes into your AI stack.
221 |
222 | **Examples by Deployment Model**:
223 | - **SaaS AI**: OpenAI manages all dependencies and model provenance
224 | - **PaaS AI**: Azure manages platform components, you handle custom additions
225 | - **IaaS AI**: Cloud provider supplies secure infrastructure, you manage the AI stack
226 | - **On-Premises**: You verify and secure the entire supply chain
227 | - **Embedded AI**: Application vendor manages their complete supply chain
228 | - **Agentic AI**: Complex - both agent framework and custom components need vetting
229 | - **AI Coding**: GitHub manages Copilot's training data and model pipeline
230 | - **MCP Systems**: You vet all plugins and extensions in the context system
231 |
232 | **Responsibility Varies**:
233 | - **Provider (SaaS, Embedded, AI Coding)**: Complete supply chain management
234 | - **Shared (PaaS, IaaS, Agentic)**: Split between platform and custom components
235 | - **Customer (On-Premises, MCP)**: Full supply chain responsibility
236 |
237 | **Key Considerations**:
238 | AI supply chain includes unique elements like model provenance, training data sources, and third-party model validation. The rise of foundation models means most organizations are part of a model supply chain they don't fully control, making vendor assessment critical.
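
One concrete control that is always available to the customer is pinning and verifying digests of downloaded model artifacts. The manifest below is hypothetical and the digest shown is a placeholder.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest of digests published by your trusted model source.
EXPECTED_SHA256 = {
    "llama-weights.bin": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}


def verify_artifact(path: Path) -> None:
    """Compare a downloaded model file against its pinned SHA-256 digest."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    expected = EXPECTED_SHA256.get(path.name)
    if expected is None:
        raise ValueError(f"No pinned digest for {path.name}; refuse to load it")
    if digest != expected:
        raise ValueError(f"Digest mismatch for {path.name}; possible tampering")
```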
239 |
240 | ---
241 |
242 | ### 10. Network Security
243 |
244 | **Focus**: Protection of network communications and infrastructure
245 |
246 | **What This Actually Means**: This covers securing how AI systems communicate - API calls, model serving endpoints, and data transfers. It includes firewalls, encryption in transit, and DDoS protection.
247 |
248 | **Examples by Deployment Model**:
249 | - **SaaS AI**: OpenAI secures all ChatGPT network infrastructure
250 | - **PaaS AI**: Azure provides network backbone, you configure virtual networks
251 | - **IaaS AI**: You configure all network security for your AI systems
252 | - **On-Premises**: Complete network security ownership
253 | - **Embedded AI**: Application provider handles all networking
254 | - **Agentic AI**: You secure all agent-to-agent and external communications
255 | - **AI Coding**: Provider secures service connectivity
256 | - **MCP Systems**: You protect all API endpoints and context system communications
257 |
258 | **Responsibility Varies**:
259 | - **Provider (SaaS, Embedded, AI Coding)**: Complete network security
260 | - **Shared (PaaS)**: Provider supplies infrastructure, customer configures
261 | - **Customer (IaaS, On-Premises, Agentic, MCP)**: Full network security ownership
262 |
263 | **Key Considerations**:
264 | AI-specific network security includes protecting model serving endpoints, securing federated learning communications, and managing API rate limits. High-bandwidth requirements for model serving create unique challenges compared to traditional applications.
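
As an illustration of customer-side endpoint protection, a token-bucket limiter in front of a model-serving endpoint might look like this sketch; the rate and burst values are assumed per-client settings.

```python
import time


class TokenBucket:
    """Minimal token-bucket rate limiter for a model-serving endpoint."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


bucket = TokenBucket(rate_per_sec=5, burst=10)  # assumed per-client limits
if not bucket.allow():
    print("429: rate limit exceeded")
```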
265 |
266 | ---
267 |
268 | ### 11. Infrastructure Security
269 |
270 | **Focus**: Security of underlying compute and storage resources
271 |
272 | **What This Actually Means**: This is about securing the physical and virtual infrastructure that AI runs on - servers, GPUs, storage systems, and virtualization layers. It includes everything from physical security to container isolation.
273 |
274 | **Examples by Deployment Model**:
275 | - **SaaS AI**: OpenAI secures all infrastructure for ChatGPT
276 | - **PaaS AI**: Azure secures physical infrastructure, you configure virtual resources
277 | - **IaaS AI**: Cloud provider secures hardware, you manage VMs and containers
278 | - **On-Premises**: You own everything from physical security to virtualization
279 | - **Embedded AI**: Application provider manages all infrastructure
280 | - **Agentic AI**: You secure the infrastructure agents run on
281 | - **AI Coding**: Provider manages service infrastructure completely
282 | - **MCP Systems**: You secure infrastructure for persistent context storage
283 |
284 | **Responsibility Varies**:
285 | - **Provider (SaaS, PaaS, Embedded, AI Coding)**: Physical infrastructure and platform
286 | - **Customer (IaaS, On-Premises, Agentic, MCP)**: Virtual infrastructure and above
287 |
288 | **Key Considerations**:
289 | AI infrastructure has unique requirements including GPU security, high-memory system protection, and distributed training security. The massive compute requirements of AI create new attack surfaces and require specialized security controls.
290 |
291 | ---
292 |
293 | ### 12. Incident Response
294 |
295 | **Focus**: Preparation for and response to security incidents
296 |
297 | **What This Actually Means**: This is about what happens when something goes wrong - detecting incidents, responding quickly, and learning from them. It includes having plans, communication protocols, and recovery procedures.
298 |
299 | **Examples by Deployment Model**:
300 | - **SaaS AI**: OpenAI handles platform incidents, notifies you of impacts
301 | - **PaaS AI**: Azure responds to platform issues, you handle application incidents
302 | - **IaaS AI**: You manage all incident response for your AI systems
303 | - **On-Premises**: Complete incident response ownership
304 | - **Embedded AI**: Vendor handles app incidents, coordinates on AI-specific issues
305 | - **Agentic AI**: You respond to all agent-related incidents
306 | - **AI Coding**: GitHub handles service incidents, you manage code security events
307 | - **MCP Systems**: You handle all context system compromise incidents
308 |
309 | **Responsibility Varies**:
310 | - **Shared (SaaS, PaaS, Embedded, AI Coding)**: Coordination required between provider and customer
311 | - **Customer (IaaS, On-Premises, Agentic, MCP)**: Full incident response ownership
312 |
313 | **Key Considerations**:
314 | AI-specific incidents include model compromise, data poisoning, prompt injection attacks, and output manipulation. Even in shared scenarios, the customer must detect and respond to attacks targeting their specific use case. Coordination is critical when responsibilities are shared.
315 |
316 | ---
317 |
318 | ### 13. Agent Governance ★
319 |
320 | **Focus**: Control and oversight of autonomous AI agents
321 |
322 | **What This Actually Means**: This is about managing AI systems that can take actions autonomously - setting boundaries, requiring approvals for certain actions, and maintaining human oversight. It's critical for systems that can make decisions or take actions without human intervention.
323 |
324 | **Examples by Deployment Model**:
325 | - **SaaS AI**: ChatGPT plugins require governance of automated actions
326 | - **PaaS AI**: Azure AI agents need configured boundaries and oversight
327 | - **IaaS AI**: You define all governance for custom agent deployments
328 | - **On-Premises**: Complete control over agent governance frameworks
329 | - **Embedded AI**: N/A - Embedded AI typically isn't autonomous
330 | - **Agentic AI**: Critical - full governance of autonomous agent systems
331 | - **AI Coding**: N/A - Code generation isn't autonomous
332 | - **MCP Systems**: You govern how persistent context influences decisions
333 |
334 | **Responsibility Varies**:
335 | - **Shared (SaaS, PaaS)**: Provider supplies controls, customer sets policies
336 | - **Customer (IaaS, On-Premises, Agentic, MCP)**: Full governance responsibility
337 | - **N/A (Embedded, AI Coding)**: Not typically applicable
338 |
339 | **Key Considerations**:
340 | Agent governance requires defining acceptable autonomous actions, establishing escalation triggers, and maintaining audit trails. The key challenge is balancing automation benefits with risk management. Critical controls include authority limits, human intervention points, and emergency stop mechanisms.
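
A minimal sketch of such controls: an authority limit, an approval list, and an emergency stop checked before any agent action executes. The action names and spend limit are hypothetical.

```python
# Hypothetical policy gate wrapped around an agent's proposed actions.
APPROVAL_REQUIRED = {"transfer_funds", "delete_records", "send_external_email"}
MAX_SPEND_USD = 100  # authority limit for autonomous spending

emergency_stop = False  # flipped by an operator to halt all agent actions


def authorize_action(action: str, spend_usd: float = 0.0) -> str:
    """Decide whether an agent action runs, escalates to a human, or is blocked."""
    if emergency_stop:
        return "blocked: emergency stop engaged"
    if action in APPROVAL_REQUIRED or spend_usd > MAX_SPEND_USD:
        return "escalated: human approval required"
    return "allowed"


print(authorize_action("summarize_report"))              # allowed
print(authorize_action("transfer_funds", spend_usd=50))  # escalated
```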
341 |
342 | ---
343 |
344 | ### 14. Code Generation Security ★
345 |
346 | **Focus**: Security of AI-generated code and development assistance
347 |
348 | **What This Actually Means**: This is about ensuring AI-generated code is secure, compliant, and doesn't introduce vulnerabilities. It includes reviewing generated code, checking licenses, and preventing sensitive data exposure.
349 |
350 | **Examples by Deployment Model**:
351 | - **SaaS AI**: N/A - Not primarily for code generation
352 | - **PaaS AI**: N/A - Not typically used for code generation
353 | - **IaaS AI**: N/A - Infrastructure focus, not code generation
354 | - **On-Premises**: N/A - Unless specifically deployed for code generation
355 | - **Embedded AI**: N/A - Embedded in applications, not development tools
356 | - **Agentic AI**: N/A - Agents don't typically generate code
357 | - **AI Coding**: Critical - GitHub Copilot, Cursor, and similar require code security
358 | - **MCP Systems**: N/A - Context systems don't generate code
359 |
360 | **Responsibility Varies**:
361 | - **Customer (AI Coding)**: Full responsibility for code security and review
362 | - **N/A (All others)**: Not applicable to these deployment models
363 |
364 | **Key Considerations**:
365 | Code generation security is unique to AI coding assistants. Key challenges include detecting vulnerabilities in generated code, ensuring license compliance, preventing secrets in code, and maintaining code quality standards. All code must be reviewed before production use.
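
A toy illustration of one such control: scanning generated code for secret patterns before it can be committed. Real scanners such as gitleaks or truffleHog apply far more rules plus entropy checks; these patterns are illustrative only.

```python
import re

# Illustrative secret patterns (assumed, deliberately incomplete).
SECRET_PATTERNS = {
    "AWS access key": r"AKIA[0-9A-Z]{16}",
    "Generic API key": r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]",
    "Private key header": r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----",
}


def scan_generated_code(code: str) -> list[str]:
    """Return the names of secret patterns found in AI-generated code."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if re.search(pattern, code)]


snippet = 'api_key = "sk-live-0123456789abcdef0123"'
findings = scan_generated_code(snippet)
if findings:
    print(f"Blocked commit; possible secrets: {findings}")
```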
366 |
367 | ---
368 |
369 | ### 15. Context Pollution Protection ★
370 |
371 | **Focus**: Preventing injection of false or misleading information into AI systems
372 |
373 | **What This Actually Means**: Think of this as protecting the AI's "understanding" from being corrupted. It's like prompt injection but broader, and includes any attempt to pollute what the AI knows or believes.
374 |
375 | **Examples by Deployment Model**:
376 | - **SaaS AI**: Attacker tries to make ChatGPT believe false "facts" through conversation manipulation
377 | - **PaaS AI**: Someone poisons your fine-tuning dataset on Azure OpenAI
378 | - **IaaS AI**: Malicious data inserted into your vector database affecting RAG responses
379 | - **On-Premises**: Similar to IaaS but includes physical access risks
380 | - **Embedded AI**: Users manipulating Salesforce Einstein through crafted inputs
381 | - **Agentic AI**: False information fed to agents affecting autonomous decisions
382 | - **AI Coding**: Malicious code patterns injected to be learned and reproduced
383 | - **MCP Systems**: Critical - poisoned context persists across all future sessions
384 |
385 | **Responsibility Varies**:
386 | - **Shared (SaaS, PaaS, Embedded, Agentic)**: Providers offer input filters, customers must validate use cases
387 | - **Customer (IaaS, On-Premises, AI Coding, MCP)**: You control the full stack and all protections
388 |
389 | **Key Considerations**:
390 | The main challenge is detecting sophisticated manipulation while maintaining memory integrity. Critical controls include input validation, source verification, and context monitoring. For MCP systems especially, implement memory versioning since pollution persists across sessions.
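
A sketch of memory versioning via a hash-chained, append-only journal, so that any later rewrite of a remembered fact breaks the chain and becomes detectable. The entry fields and provenance scheme are assumptions.

```python
import hashlib
import json
import time

memory_log = []  # append-only journal of memory writes


def remember(key: str, value: str, source: str) -> None:
    """Append a memory entry with provenance and a chained integrity hash."""
    prev_hash = memory_log[-1]["hash"] if memory_log else "genesis"
    entry = {"key": key, "value": value, "source": source,
             "ts": time.time(), "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    memory_log.append(entry)


def verify_log() -> bool:
    """Recompute the hash chain; any rewritten entry breaks the chain."""
    prev = "genesis"
    for entry in memory_log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```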
391 |
392 | ---
393 |
394 | ### 16. Multi-System Integration Security ★
395 |
396 | **Focus**: Security across interconnected AI systems and traditional applications
397 |
398 | **What This Actually Means**: This is about securing the connections when AI systems talk to other systems - both AI and traditional. It covers API security, data flow protection, and managing the complexity of integrated systems.
399 |
400 | **Examples by Deployment Model**:
401 | - **SaaS AI**: ChatGPT calling plugins or external APIs needs secure integration
402 | - **PaaS AI**: Azure OpenAI connecting to your databases and applications
403 | - **IaaS AI**: Your custom models integrating with other services
404 | - **On-Premises**: Your AI systems connecting to internal applications
405 | - **Embedded AI**: Critical - AI features deeply integrated with application ecosystem
406 | - **Agentic AI**: Agents coordinating with each other and external services
407 | - **AI Coding**: Copilot accessing repositories and development tools
408 | - **MCP Systems**: Critical - multiple context sources and system connections
409 |
410 | **Responsibility Varies**:
411 | - **Shared (SaaS, PaaS, Embedded, Agentic)**: Provider supplies integration tools, customer secures usage
412 | - **Customer (IaaS, On-Premises, AI Coding, MCP)**: Full integration security ownership
413 |
414 | **Key Considerations**:
415 | Multi-system integration creates complex attack surfaces where vulnerabilities in one system can cascade. Key challenges include securing AI-to-AI communications, managing data flows across trust boundaries, and maintaining consistent security policies. Critical for systems with broad integration scope.
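
A minimal sketch of one such policy: gating every outbound tool or API call an AI component makes against an explicit host allowlist. The hostnames are placeholders.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the only hosts this AI system may call out to.
ALLOWED_HOSTS = {"api.internal.example.com", "vectors.internal.example.com"}


def check_outbound_call(url: str) -> str:
    """Gate every tool or API call against the allowlist before it is made."""
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"Outbound call to {host} crosses a trust boundary")
    return url


check_outbound_call("https://api.internal.example.com/v1/search")  # allowed
# check_outbound_call("https://attacker.example.net/exfil")        # raises
```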
416 |
417 | ---
418 |
419 | ## Using This Guide
420 |
421 | This guide maps to the [Responsibility Matrix](responsibility-matrix.md) which shows specific responsibility assignments (Provider/Shared/Customer) for each domain across all 8 deployment models. Use both documents together to understand:
422 |
423 | 1. **What** each security domain covers (this document)
424 | 2. **Who** is responsible in your deployment model (responsibility matrix)
425 | 3. **How** to implement based on your specific needs
426 |
427 | Remember: Even in the most managed scenarios, customers retain significant security responsibilities. Understanding these domains helps you identify and address your obligations regardless of deployment model.
--------------------------------------------------------------------------------