AI Security Research Engineer
Pune, India
Qualys
Discover how Qualys helps your business measure & eliminate cyber threats through a host of cybersecurity detection & remediation tools. Try it today!Come work at a place where innovation and teamwork come together to support the most exciting missions in the world!
Company Overview
Qualys is a leading provider of cloud-based security and compliance solutions, processing vast amounts of data to help our global customers secure their networks, devices, and applications. With a strong focus on innovation and scale, Qualys empowers organizations to achieve continuous security and compliance through real-time visibility and analytics. As we continue to grow, we are looking for passionate and skilled professionals to join our mission in redefining the future of cybersecurity.
Position Overview
We are seeking an experienced and curious AI Security Researcher to explore and uncover vulnerabilities at the intersection of artificial intelligence, machine learning, and cybersecurity. You will play a critical role in identifying risks in LLM-powered systems, adversarial inputs, model manipulation techniques, prompt injection exploits, and other emerging AI threats.
This role is perfect for someone who has a strong background in security research, a deep understanding of AI/ML systems and architectures, and a passion for red teaming, adversarial testing, and threat modeling in AI contexts.
Key Responsibilities
Conduct in-depth research on security vulnerabilities in LLMs and AI systems, including prompt injection, jailbreaks, data leakage, and adversarial attacks.
Design and execute offensive security assessments and red teaming campaigns for GenAI and ML-powered systems.
Identify and classify novel threat vectors targeting model inference, training pipelines, and model-serving architectures.
Collaborate with engineering and product teams to help design secure AI-powered features and define hardening strategies.
Develop proof-of-concepts (PoCs), technical whitepapers, or blog posts on emerging threats and best practices.
Monitor and analyze threat intelligence and academic research related to AI model security and supply chain risks.
Contribute to building internal tools for scanning, fuzzing, and automating LLM vulnerability discovery.
Represent Qualys in security and AI research communities through speaking, publishing, or standardization efforts.
Required Qualifications
5+ years of experience in security research, penetration testing, or exploit development, with a focus on application or cloud security.
Strong working knowledge of machine learning and LLM architectures (e.g., transformers, embeddings, fine-tuning, RAG).
Familiarity with GenAI-specific risks such as prompt injection, model evasion, hallucination-based exploits, data leakage, or model theft.
Experience in scripting and automation using Python (preferred) or similar languages for testing and PoC development.
Hands-on understanding of LLM deployment scenarios (e.g., OpenAI, HuggingFace, custom-hosted models) and threat surfaces involved.
Ability to analyze logs, API interactions, inference responses, and prompt chains to identify anomalous or risky behavior.
Strong analytical mindset, technical writing skills, and the ability to communicate complex vulnerabilities clearly.
Familiarity with responsible disclosure practices, bug bounty programs, or security research ethics.
Preferred Qualifications
Background in AI/ML security red teaming or adversarial ML.
Knowledge of vector database risks, insecure RAG pipelines, model fingerprinting, and AI model supply chain attacks.
Experience using or contributing to tools like LangChain, AutoGen, Guardrails.ai, LLM Guard, or Tracer.
Publications or presentations in conferences like Black Hat, DEF CON, USENIX, NeurIPS, or OWASP.
Familiarity with Secure SDLC, threat modeling frameworks (e.g., STRIDE, MITRE ATLAS), and AI-specific security checklists.
Our Work Environment
Collaborative & Transparent: We use virtual collaboration and pairing tools to share ideas openly. Siloed work is discouraged—teamwork is our strength.
Agile & Flexible: We focus on delivering incremental value, adapting processes only when they serve our goals.
Diverse & Inclusive: We believe in building teams with diverse perspectives, which fuels creativity and innovative problem-solving.
People-Focused: Our people are our most valuable asset. We invest in personal growth and align individual strengths to company objectives.
Why Join Us?
Leadership Impact: Be part of a security-first culture driving innovation in AI/ML safety at a global scale.
Cutting-Edge Technology: Work on real-world security issues in LLMs and shape the defense landscape of next-gen AI systems.
Professional Growth: Access a broad range of resources, mentorship, and exposure to cutting-edge research.
Inclusive Culture: Join a team that values diverse thinking, critical research, and collaboration.
Competitive Compensation: We offer a comprehensive benefits package, including healthcare, retirement plans, and more.
* Salary range is an estimate based on our InfoSec / Cybersecurity Salary Index 💰
Tags: Agile Analytics APIs Artificial Intelligence Automation Cloud Compliance Exploit Exploits Generative AI LLMs Machine Learning Offensive security OpenAI OWASP Pentesting POCs Python Qualys Red team Scripting SDLC Security assessment Threat intelligence Vulnerabilities
Perks/benefits: Career development Competitive pay Conferences Flex hours
More jobs like this
Explore more career opportunities
Find even more open roles below ordered by popularity of job title or skills/products/technologies used.