AI Innovation Security Researcher
Tel Aviv-Yafo, Tel Aviv District, IL
Cycode
An Application Security Posture Management (ASPM) platform for developer security that can integrate with or replace existing security testing tools while providing visibility, prioritization, and remediation of vulnerabilities across the entire SDLC.
About the Role
We’re seeking an AI Innovation Security Researcher to serve as the critical link between our AI development team and our security experts. In this role, you will:
- Translate real-world security challenges into AI-driven solutions
- Shape prompt strategies and model workflows for security use-cases
- Contribute to AI system development—help architect, prototype, and iterate on models and pipelines
- Design and execute rigorous benchmarks to evaluate the performance of security-focused AI tools
Your work will power capabilities such as automated exploitability checks for SAST/SCA findings, AI-guided remediation of container vulnerabilities (e.g. Dockerfile misconfigurations, unsafe downloads), and detection/analysis of data leaks. You’ll also help amplify our thought leadership by authoring blogs and delivering conference talks on cutting-edge AI-security topics.
Key Responsibilities
Research & Benchmarking
- Define evaluation frameworks for AI models tackling security tasks
- Build test suites for exploitability analysis (e.g. proof-of-concept generation, severity scoring)
- Measure and report on model accuracy, false-positive/negative rates, and robustness
AI Collaboration & Development
- Work with ML engineers to craft and refine prompt templates for security scenarios
- Contribute to model architecture design, fine-tuning, and deployment workflows
- Investigate model behaviors, iterate on training data, and integrate new AI architectures as needed
Security Expertise & Tooling
- Apply deep knowledge of static and software composition analysis (SAST/SCA)
- Analyze container build pipelines to identify vulnerability origins and remediation paths
- Leverage vulnerability databases (CVE, NVD), threat modeling, and risk assessment techniques
Content Creation & Evangelism
- Write technical blog posts, whitepapers, and documentation on AI-driven security solutions
- Present findings at internal brown-bags and external conferences
- Mentor teammates on AI security best practices
Requirements
- Bachelor’s or Master’s degree in Computer Science, Cybersecurity, AI/ML, or related field
- 3+ years in security research or application security engineering
- Hands-on with LLMs (e.g. GPT, PaLM), prompt engineering, or fine-tuning workflows
- Proficient in Python
- Deep understanding of SAST/SCA tools (e.g. SonarQube, Snyk) and their outputs
- Familiarity with container security tooling (Docker, Kubernetes, Trivy)
- Strong data analysis skills for evaluating model outputs and security telemetry
- Excellent written and verbal communication; ability to distill complex topics for diverse audiences
- Collaborative mindset; experience working across research, engineering, and security teams
Preferred Qualifications
- Experience in AI/ML system development—model training, fine-tuning, and production deployment
- Publications or presentations in AI, security, or DevSecOps venues
- Prior work developing open-source security tools or frameworks
- Experience with cloud security services (AWS/Azure/GCP) and infrastructure-as-code scanning
- Familiarity with CI/CD pipelines and MLOps tooling