Information Security Specialist (US) - AI Penetration Tester

Posted 2026-05-05
Remote, USA | Full-time | Immediate Start

Work Location:
Mount Laurel, New Jersey, United States of America

Hours:
40

Pay Details:
$98,160 - $159,270 USD

TD is committed to providing fair and equitable compensation opportunities to all colleagues. Growth opportunities and skill development are defining features of the colleague experience at TD. Our compensation policies and practices have been designed to allow colleagues to progress through the salary range over time as they progress in their role. The base pay actually offered may vary based upon the candidate's skills and experience, job-related knowledge, geographic location, and other specific business and organizational needs.

As a candidate, you are encouraged to ask compensation-related questions and have an open dialogue with your recruiter, who can provide you with more specific details for this role.

Line of Business:
Technology Solutions
Job Description:

The Information Security Specialist - AI Penetration Tester is responsible for conducting advanced offensive security testing across AI/ML systems, LLM integrations, GenAI platforms, and associated infrastructure. This role serves as a subject-matter expert in AI/LLM security, partnering with engineering, cyber, cloud, and architecture teams to identify vulnerabilities, improve controls, and ensure safe and compliant deployment of AI capabilities across the enterprise.

    AI/LLM Offensive Security & Vulnerability Testing
  • Conduct Penetration Tests: Design and execute comprehensive penetration tests targeting AI/ML models, LLM applications, model pipelines, retrieval systems, data agents, and AI-enabled business workflows.
  • AI/LLM Vulnerability Analysis: Identify vulnerabilities such as jailbreaking, prompt injection, model extraction, adversarial ML attacks, data poisoning, RAG bypasses, and safety guardrail circumvention.
  • Tooling & Automation: Evaluate and develop tooling (including internal utilities and open-source frameworks) to automate and scale AI/LLM security testing.
    Security Architecture, Hardening & Risk Assessment
  • Assess Security Posture: Analyze training data governance, guardrail design, inference endpoints, system prompts, agent autonomy, model monitoring, and model-ops pipelines.
  • Risk Assessments: Perform security and safety risk analyses on new and existing AI/ML deployments, including cloud-based services, APIs, model marketplaces, and third-party LLM integrations.
  • Model Supply Chain Security: Assess AI supply chain risks, dependency integrity, and alignment with enterprise standards and regulatory obligations.
    Documentation, Reporting & Communication
  • Report Findings: Deliver clear, actionable findings to both technical and non-technical stakeholders. Produce detailed reporting, including:
      – Executive summaries
      – Technical proofs of concept
      – Prioritized remediation recommendations
  • Stakeholder Engagement: Collaborate with Engineering, Data Science, Cloud, Cyber Defense, Architecture, and Risk to remediate findings and improve AI security posture.
    Governance, Standards & Continuous Improvement
  • Develop Best Practices: Contribute to organization-wide AI security standards, policies, control objectives, and hardening practices.
  • Regulatory Compliance: Ensure AI penetration testing aligns with regulatory, privacy, model safety, and internal policy requirements.
  • Continuous Learning: Maintain deep expertise in emerging AI threats, industry frameworks, evaluation methodologies, and global safety standards.
    Incident Response & Audit Support
  • Participate in AI/ML-related security incident investigations, providing subject-matter expertise on root cause analysis and exploitation methods.
  • Support audit preparation and assist in drafting management responses, remediation plans, and risk treatment documentation.
    Education & Experience
  • Bachelor's degree preferred
  • Information security certification / accreditation an asset
  • 7+ years of relevant experience
  • Expert knowledge of IT security and risk disciplines and practices
    Preferred Qualifications
    Technical Skills
  • 5+ years in application security or penetration testing, with hands-on experience in AI/ML environments preferred.
  • 7+ years of experience using penetration testing tools (Metasploit, Burp Suite, Nmap, Kali, etc.).
  • Strong knowledge of AI/LLM vulnerabilities including OWASP Top 10 for LLMs, adversarial attacks, prompt injection, and model safety testing frameworks.
  • Familiarity with scripting and automation (Python preferred), model interrogation techniques, and cloud-native AI services (Azure, AWS, GCP).
  • Experience penetration testing AI/LLM platforms, cloud workloads, and PCI-scoped environments.
  • Knowledge of security frameworks (NIST AI RMF, OWASP LLM/ML, ISO 42001, MITRE ATLAS).
  • Relevant certifications: OSCP, CEH, GPEN, CISSP, or AI/ML security certifications.
  • Experience supporting audits, compliance reviews, and incident response activities.
#EVMAI #TDCyberSecurity #Hybrid

Physical Requirements:

Never: 0%; Occasional: 1-33%; Fre