Alice.io (formerly known as ActiveFence) is a leading trust, safety, and security company. Just like ‘Alice’ herself, we go down the rabbit hole into the emerging world of AI and focus on safeguarding these technologies. In a world where AI has fundamentally changed the nature of risk, Alice offers coverage across the entire AI lifecycle.<br><br>We're currently on the ground floor of an exciting new initiative: building an "Elite Collective" of the world's premier cyber specialists, offering high-stakes, project-based work to the "best of the best" red teamers in the industry.<br><br>Work is on-demand and project-dependent, scaling up and down based on the needs of the client. This is not a high-volume, repetitive role; it is an opportunity to be tapped for specific, high-impact engagements when your unique expertise is required.<br><br>The most sophisticated technology challenges in the world are waiting down the rabbit hole. We’re looking for the sharpest minds to help us secure the frontier.<br><br>The invitation is open: Come join us at the tea party.<br><br><strong>About the position:<br><br></strong><strong>As one of Alice’s Security Red Team Specialists, you’ll focus on generative AI models. 
You will play a critical role in enhancing the security and integrity of our cutting-edge AI technologies.<br><br></strong>Your primary responsibility will be to conduct analysis and testing of our generative AI systems, including but not limited to language models, image generation models, and any related infrastructure.<br><br>Your objective is to help clients secure their AI models and frameworks by identifying weaknesses, assessing risks, and providing clear steps for improvement.<br><br><strong>Key Responsibilities<br><br></strong><ul><li>Simulated Cyber Attacks: Conduct sophisticated and comprehensive simulated attacks on generative AI models and their operating environments to uncover vulnerabilities.</li><li>Vulnerability Assessment: Evaluate the security posture of AI models and infrastructure, identifying weaknesses and potential threats.</li><li>Risk Analysis: Perform thorough risk analysis to determine the impact of identified vulnerabilities and prioritize mitigation efforts.</li><li>Mitigation Strategies: Collaborate with development and security teams to develop effective strategies to mitigate identified risks and enhance model resilience.</li><li>Research and Innovation: Stay abreast of the latest trends and developments in AI security, ethical hacking, and cyber threats. Apply innovative testing methodologies to ensure cutting-edge security practices.</li><li>Documentation and Reporting: Maintain detailed documentation of all red team activities, findings, and recommendations. 
Prepare and present reports to senior management and relevant stakeholders.<br><br></li></ul>Requirements:<br><br><strong>Must-Have:<br><br></strong><ul><li>Proven experience in AI vulnerability analysis</li><li>Strong understanding of AI technologies and their underlying architectures, especially generative models and agentic frameworks.</li><li>At least 5 years of experience in web penetration testing.</li><li>Excellent analytical, problem-solving, and communication skills.</li><li>Ability to work in a fast-paced, ever-changing environment.<br><br></li></ul>Nice-to-Have:<br><br><ul><li>Proficiency in Python or Node.js</li><li>Advanced certifications in offensive cybersecurity (e.g., OSWE, OSCE3, SEC542, SEC522) are highly desirable.</li><li>Familiarity with agentic frameworks and hands-on agentic development experience</li><li>Bachelor’s or Master’s degree in Computer Science, Information Security, or a related field.</li><li>Proven record of vulnerability disclosures, such as published CVEs<br><br></li></ul><strong>All applicants will be required to complete a screening exam and background check.</strong>