CYB 213 – Generative AI Privacy & Cybersecurity Risk
Course Overview
In October of 2023, the US Federal Government issued a landmark Executive Order to address the risks of Artificial Intelligence (AI). One of the primary objectives is to “develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.” In the same week, the EU finalized an AI Act to “regulate the development and use of Artificial Intelligence (AI) systems in the EU, including in the EU institutions, bodies, offices and agencies (EUIs).” This course ensures that your organization’s Cyber Defense team has knowledge of Generative AI attack vectors and recommended mitigations. Coverage includes how to identify and mitigate the cybersecurity risks associated with the Large Language Models (LLMs) that power text-based AI tools such as ChatGPT, along with the security risks inherent to system integration.
Upon successful completion of this course, you will have the knowledge and skills to identify and mitigate the top security threats and attack vectors that are being used by cybercriminals to exploit Generative AI technology, including:
- Prompt Injection
- Data Poisoning
- Insecure Output Handling
- Model Theft
- Excessive Implementation
- Insecure Plugin Integration
- Model Inversion
- Detection Bypass
- Malware Generation
- Over reliance on LLM output
Looking To Learn More?
Request more information on our courses and labs.
* required
Course Details
NICE Work Role Category
Available Languages
- English