AI Cybersecurity: Attack and Defend

Course 1216

  • Duration: 3 days
  • Labs: Yes
  • Language: English
  • 17 NASBA CPE Credits (live, in-class training only)
  • Level: Intermediate

This course explores the intersection of AI and cybersecurity, starting with a foundational understanding of AI technologies such as machine learning, deep learning, and natural language processing, as well as their applications across industries. It then examines the risks of AI adoption, including risk management and ethical considerations, and shows how to identify vulnerabilities in AI systems.

The course covers integrating AI into security operations, using AI for intrusion detection, threat intelligence, and automated incident response, and examines AI's potential to transform hacking techniques, highlighting AI-powered attacks and tools. It also emphasizes aligning AI with common security frameworks and regulatory compliance, and explores future trends such as federated learning, AI-powered cyber deception, quantum computing for AI, explainable AI, and AI-driven security automation.

AI Cybersecurity Training Delivery Methods

  • In-Person

  • Online

  • Upskill your whole team by bringing Private Team Training to your facility.

AI Cybersecurity Training Information

In this course, you will:

  • Understand AI Foundations and Applications in Security
  • Assess Risks and Ethical Considerations in AI Adoption
  • Analyze AI Vulnerabilities and Attack Vectors
  • Leverage AI for Offensive and Defensive Cyber Operations
  • Enhance Security Operations with AI
  • Navigate AI Security Frameworks and Emerging Technologies

Training Prerequisites

Attendees should have foundational knowledge in networking and cybersecurity.

AI Cybersecurity Training Outline

Chapter 1: Architecture and Operation of AI

  • Evolution of AI technology
  • Applying AI in Security
  • Machine Learning
  • Deep Neural Networks
  • CNN, RNN, RvNN, Transformers
  • NLP, LLM
  • Generative AI
  • LAB: Investigating Discriminative and Generative AI
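
The chapter lab contrasts discriminative and generative approaches. As a rough companion illustration (not the lab material itself), the sketch below fits one model of each kind on synthetic two-dimensional data; it assumes scikit-learn and NumPy are available, and the data and feature meanings are invented for illustration.

    # Minimal sketch contrasting a discriminative and a generative classifier.
    # Assumes scikit-learn; the two synthetic clusters stand in for "benign"
    # vs. "malicious" samples and are not real security data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression   # discriminative: models P(y|x)
    from sklearn.naive_bayes import GaussianNB             # generative: models P(x|y)P(y)

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)

    disc = LogisticRegression().fit(X, y)
    gen = GaussianNB().fit(X, y)

    sample = np.array([[2.5, 2.5]])
    print("discriminative P(malicious|x):", disc.predict_proba(sample)[0, 1])
    print("generative     P(malicious|x):", gen.predict_proba(sample)[0, 1])

Both models answer the same question, but the generative one learns how each class is distributed, while the discriminative one learns only the boundary between them, which is the distinction the lab explores.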

Chapter 2: Risk in Adopting AI Solutions

  • Risk in Security
  • Risks of AI Implementations
  • Ethical Considerations
  • Risks With GenAI
  • Protecting From GenAI-Aided Attacks
  • Mitigating AI Risks
  • LAB: Protecting Sensitive Data With DLP
  • LAB: Conducting an AI Risk Assessment
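
As a companion to the DLP lab, the sketch below shows one simple way a pre-filter might redact obviously sensitive strings before a prompt is sent to an external GenAI service. The regex patterns and the redact_for_genai helper are illustrative assumptions, not a specific product's API.

    # Minimal sketch of a DLP-style pre-filter for GenAI prompts.
    # The patterns below are simplified examples, not production-grade detection.
    import re

    PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact_for_genai(text: str) -> str:
        """Replace matches of known sensitive patterns with typed placeholders."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[REDACTED-{label}]", text)
        return text

    prompt = "Summarize this ticket: user jane.doe@example.com, SSN 123-45-6789."
    print(redact_for_genai(prompt))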

Chapter 3: Hacking AI Vulnerabilities

  • AI Algorithms, Data Sets, Models
  • OWASP AI Security Risks
  • Prompt Engineering
  • AI Vulnerabilities
  • Attacks Against Classifiers
  • NIST Adversarial ML Taxonomy
  • Adversarial ML Threat Matrix
  • AI Red Teaming
  • LAB: Penetration Testing an AI System
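
To make the "Attacks Against Classifiers" topic concrete, the sketch below mounts a simple evasion attack in the spirit of FGSM against a hand-trained logistic-regression detector. It is a toy example in pure NumPy under invented data, not the system used in the lab.

    # Minimal sketch of an evasion attack against a linear classifier:
    # nudge a malicious sample in the direction that lowers its "malicious" score.
    import numpy as np

    rng = np.random.default_rng(1)
    # Toy training data: class 1 (malicious) sits at higher feature values.
    X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2))])
    y = np.concatenate([np.zeros(200), np.ones(200)])

    # Train logistic regression by plain gradient descent.
    w, b = np.zeros(2), 0.0
    for _ in range(500):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= 0.5 * (X.T @ (p - y) / len(y))
        b -= 0.5 * np.mean(p - y)

    def score(x):
        return 1 / (1 + np.exp(-(x @ w + b)))   # P(malicious | x)

    x_mal = np.array([3.0, 3.0])
    # FGSM-style step: move opposite the sign of the score gradient,
    # which for a linear model is simply the sign of w.
    eps = 2.0
    x_adv = x_mal - eps * np.sign(w)

    print("original score:", round(float(score(x_mal)), 3))
    print("evasive score :", round(float(score(x_adv)), 3))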

Chapter 4: Exploiting AI to Hack Systems

  • Using AI to Hack
  • GenAI Social Engineering
  • Deepfakes
  • AI-Infused Hacking
  • Long Con AI
  • LAB: Enhance Hacking With GenAI

Chapter 5: Improving Security Operations with AI

  • SecOps
  • AI-Based Security Processes
  • IT Operations and Cloud AI
  • GenAI Red Teaming
  • AI Security Tools
  • Google AI SecOps
  • Cybersecurity Copilot
  • LAB: Defend Security With AI
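
As one concrete example of the AI-based security processes this chapter discusses, the sketch below applies unsupervised anomaly detection to synthetic login telemetry. It assumes scikit-learn; the feature set (login hour, failed attempts, data transferred) and the contamination value are invented for illustration.

    # Minimal sketch of AI-assisted SecOps: flag unusual login events
    # with an Isolation Forest trained on "normal" activity only.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(2)
    # Normal activity: business hours, few failures, modest data transfer.
    normal = np.column_stack([
        rng.normal(13, 2, 500),      # login hour
        rng.poisson(1, 500),         # failed attempts
        rng.normal(50, 10, 500),     # MB transferred
    ])
    # A few suspicious events: off-hours, many failures, large transfers.
    suspicious = np.array([[3, 15, 900], [2, 20, 1200]])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
    print(model.predict(suspicious))   # -1 flags an anomaly, 1 means normal

In practice such a detector would feed a triage queue or an automated response playbook rather than a print statement, which is the workflow the lab walks through.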

Chapter 6: Common AI Security Frameworks

  • Regulatory Compliance for AI
  • NIST AI Risk Management Framework
  • OWASP Security & Governance Checklist
  • Responsible AI
  • Google Secure AI Framework
  • Federated Learning
  • Zero Trust Generative AI
  • GenAI Governance Framework


AI Cybersecurity Training FAQs

Who should take this course?

  • Cybersecurity Professionals
  • AI and Data Science Professionals 
  • IT Professionals 
  • Data Privacy and Compliance Officers 
  • Developers and Software Engineers 