Course curriculum
Module 1: Foundations of AI and AI Security
- Evolution of AI and Security Challenges
- Why AI Systems Need Dedicated Security Models
- Differences Between Traditional AppSec and AI Security
- AI Security Terminology and Core Concepts
- Machine Learning, Deep Learning, and Generative AI
- Supervised, Unsupervised, and Reinforcement Learning
- Model Training, Evaluation, and Deployment Basics
- AI Use Cases in Enterprises and AI Failure Modes
- CIA Triad and Core Security Principles
- Threat Modeling Basics
- Secure Software Development Lifecycle (SSDLC)
- Identity, Access Management, and Cryptography Basics
- Network and Cloud Security Fundamentals
Module 2: AI Architecture, MLOps, and Data Security
- End-to-End AI System Architecture
- Data Pipelines, Feature Stores, and Data Flows
- Model Lifecycle Management
- CI/CD for ML (MLOps)
- Risks in AI Deployment Pipelines
- Data Classification and Sensitivity
- Data Poisoning and Data Tampering Attacks
- Secure Data Storage and Access Control
- Data Lineage and Provenance
- Data Retention and Secure Deletion
Module 3: Privacy, Governance, and Responsible AI
- Privacy Risks in AI Systems
- Anonymization and Pseudonymization Techniques
- Consent, Purpose Limitation, and Lawful Processing
- Privacy by Design in AI Systems
- AI Risk Management Frameworks
- Policies, Standards, and Controls for AI
- Regulatory Landscape
- Audit, Evidence, and Compliance Reporting
- Ethical and Responsible AI Governance
- Relationship Between Security, Safety, and Ethics
- Building Trustworthy AI Systems
Module 4: Threat Modeling and Adversarial ML
- AI-Specific Threat Modeling Approaches
- Attack Surfaces in ML Pipelines
- STRIDE and MITRE ATLAS for AI
- Abuse Case and Misuse Case Design
- Risk Scoring and Prioritization
- Adversarial Examples and Evasion Attacks
- Poisoning and Backdoor Attacks
- Model Inversion and Membership Inference
- Model Extraction and Theft
- Defense Strategies and Their Limitations
Module 5: Secure AI Development and Operations
- Security Requirements for AI Projects
- Secure Design and Architecture Reviews
- Trusted Datasets and Data Validation
- Secure Training Environments
- Reproducibility and Integrity Checks
- Preventing Backdoors and Trojans
- Secure Evaluation and Benchmarking
- Secure Model Serving Architectures
- API Security for AI Services
- Access Control and Rate Limiting
- Model Integrity Verification
- Runtime Monitoring and Drift Detection
- Secure Deployment and Operations
- Continuous Improvement and Maturity Models
Module 6: Generative AI, LLM, and Prompt Security
- LLM Architectures and Risk Profiles
- Prompt Injection and Jailbreak Attacks
- Data Leakage and Training Data Exposure
- Hallucinations, Abuse, and Safety Risks
- Guardrails, Filters, and Policy Enforcement
- Prompt Design from a Security Perspective
- Input Validation and Sanitization
- Context Isolation and Tool Security
- Reducing the Prompt-Based Attack Surface
Module 7: AI Infrastructure and Cloud Security
- Securing GPUs, TPUs, and AI Accelerators
- Secrets, Keys, and Credential Management
- Network Segmentation for AI Workloads
- Supply Chain Security for AI Platforms
- Shared Responsibility Model for AI
- Securing Managed AI Services
- Storage, Compute, and IAM Hardening
- Multi-Tenant Risks and Isolation
Module 8: AI Security Operations, Detection, and Testing
- What to Log in AI Systems
- Security Telemetry and SIEM Integration
- Detecting Model Abuse and Anomalies
- Drift, Degradation, and Data Shifts
- Alerting and Automated Response
- AI-Specific Incident Types
- Containment of Model and Data Breaches
- Forensic Analysis of Data Pipelines and Models
- Recovery, Re-Training, and Re-Deployment
- Post-Incident Reviews and Hardening
- Adversarial Testing of Models and Prompts
- Penetration Testing AI APIs and Pipelines
- Reporting, Metrics, and Risk Communication
Study Material