You're pledging to donate if the project hits its minimum goal and gets approved. If not, your funds will be returned.
I propose the establishment of a formal training program inspired by the gamified Capture the Flag (CTF) format, with a specific focus on AI & LLM security. The training program will be tailored for experienced cybersecurity professionals, technology leaders and researchers to address the growing skills gap in safeguarding AI technologies. This program will focus on equipping participants with the knowledge and hands-on experience needed to adapt to the rapid adoption of AI in enterprises.
The training will be available in three formats with varying content size:
A one-day (8-hour) workshop at academic conferences.
A two-day (16-hour) live training at premier cybersecurity events.
A four-week virtual boot camp on the Maven platform.
Approximately 70% of the program will be dedicated to hands-on labs, simulating real-world adversarial attacks on AI agents, LLM-based applications and other scenarios. These labs will be hosted as an application that will be accessible to attendees even after the workshop for continued learning and development. Participants will explore attack tactics and implement robust defense measures to develop actionable AI security controls. The training will emphasize the importance of AI safety and will highlight the role of security professionals in mitigating risks, a domain traditionally outside the scope of cybersecurity.
By seamlessly integrating attack and defense components, this training will offer comprehensive insights into adversarial techniques targeting AI systems and provide practical strategies for enhancing organizational defenses.
Build a comprehensive training course consisting of reading materials, slides, pointers and lab exercises.
Build a complete online platform to simulate the labs through use-cases like prompt injections, AI agent manipulations, side-stepping attacks, instances of insecure code executions though LLMs, vulnerability identification, infrastructure takeover, training data poisoning, AI Red teaming practices and much more.
Extend the platform to be accessible even after the training and possibly to a wider audience for learning and development.
Build a Slack community of like-minded individuals for continued knowledge sharing and fostering community knowledge sharing.
Three major expenses where the funding will be used:
Cost for using APIs of public LLM applications like OpenAI, Anthropic.
GPU cost for fine-tuning base model to suite the requirements of the labs.
Cloud hosting cost for running the CTF application with adversarial simulation lab.
I will be doing most of the work related with developing the content and online labs. External contracting help will be needed to complete the fornt end part of the application which will be less than 10% of the overall effort in building this workshop.
Track Record:
Previous Experience running cloud security training as full 2-day and 3-day workshops at international Cybersecurity conferences like Blackhat, RSA, DEFCON, Hack in Paris, BruCon and many more. Ran the training in over 11 different countries training over 600 cybersecurity professionals.
Experience speaking at Cybersecurity conferences on cutting edge research topics in the field of Cloud Security, AI Security, Data privacy and governance.
Experience running private, hands-on workshops as a result of my public training at Large Cybersecurity events. Focus areas would include Cloud security, malware and threat research.
Experienced Cybersecurity professional with over 14 years of industry experience working as a researcher, consultant as a leader.
Author and co-author of 4 different books in the field of Cybersecurity, inventor and co-inventor of 3 patents, published multiple whitepapers and blogs on industry channels related with the field of Security research.
Published interviews and thought leadership sessions in various forums.
Links to above mentioned workshops, professional profile, scholar profiles can be found in this Google Doc: https://docs.google.com/document/d/17ELEKYw0TBqnlyeU6p2f3NwWhEo3hmzFkhK8acKXqkA/edit?tab=t.0#heading=h.cn0py6a0ez0j
The potential causes of failure for this workshop may include insufficient participant engagement, lack of practical relevance, or logistical challenges in delivering the hands-on labs.
The success will depend on keeping the labs relevant and engaging that relates with the day to day challenges faced by cybersecurity professionals when securing AI Applications and services.
Cloud hosting and GPU usage gets too high.
NA
There are no bids on this project.