Posted May 3, 2026

Red Teaming Expert


About mpathic.ai

Keeping the human in AI. mpathic is a trusted leader in advancing quality and safety in AI systems through expert-led evaluation and human data. We partner with leading technology companies to support red teaming, trust & safety, expert annotation, and model evaluation across high-stakes domains.


About the Role

mpathic is seeking part-time, project-based Red Teaming Experts to support a red-teaming and evaluation campaign focused on AI safety and model behavior in sensitive, real-world interactions.



In this role, you will design, simulate, and evaluate conversations with AI systems to assess safety, risk, and behavioral performance. You will identify failure modes, edge cases, and policy gaps—particularly in scenarios involving distress, ambiguity, or escalation.



This role involves roleplaying and reviewing clinical scenarios with AI agents. As such, we are ideally seeking candidates with creative or performance-driven strengths, as these competencies enhance the realism, nuance, and emotional depth needed for AI safety testing. Examples include, but are not limited to:


What You’ll Be Working On 

You will help identify, prevent, and characterize risks that emerge when users interact with AI systems.



Responsibilities may include:


What We’re Looking For

Successful candidates are detail-oriented, analytically strong, and experienced in evaluating or stress-testing AI systems in complex or high-risk scenarios.



Professional experience in one or more of the following:

Strong understanding of:

Ability to identify:

Experience with or interest in:

Comfort with:

Willingness to:

Nice to Have (Not Required)


Compensation

$30-60/hour, depending on experience and the specific project's tasks and difficulty

Interested in this role? Apply on iHire