Research Engineer/Research Scientist - Red Team (Alignment)
London, UK
GBP 29K-145K | Mid-level | Full Time | Found 11d ago
Tasks
- Analyze and report results
- Build and execute alignment evaluations
- Conduct threat modeling and risk analysis
- Contribute to public research publications
- Coordinate holistic risk assessments
- Design and develop software tools
- Mentor external collaborators
- Research methods for detecting misalignment
- Translate risk concepts into hypotheses
Perks/Benefits
- Ample compute resources
- Career growth
- Collaborations with government and AI leaders
- Flexible hybrid working
- Generous leave
- Impactful work
- Learning/stipend allowances
- Parental leave
- Pension contributions
Skills/Tech-stack
AI Safety | Adversarial ML | Agents | Alignment Research | Evaluation Frameworks | Experimental Design | LLM Tools | ML | Open-Source Development | Policy Communication | PyTorch | Python | Research Coding | Risk Assessment | Software Engineering | Stress Testing | Threat Modeling