The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We’re in the heart of the UK government with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.
We’re here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.
About the AI Security Institute
The UK AI Security Institute is the world's largest and best-funded team dedicated to understanding the capabilities and impacts of advanced AI and developing practical risk mitigations. We’re in the heart of the UK government with direct lines to No. 10 (the Prime Minister’s office), and we work with frontier developers and governments globally.
We’re here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, agility and international influence, this is the best place to shape both AI development and government action to ensure AI systems are deployed safely and responsibly.
About the Team
The Cyber and Autonomous Systems Team (CAST) is looking to research and map the evolving frontier of AI capabilities and propensities to inform critical security decisions that reduce loss-of-control risks from frontier AI. We focus on preventing harms from high-impact cybersecurity capabilities and highly capable autonomous AI systems.
Our team is a blend of high-velocity generalists and technical staff, from organisations such as Meta, Amazon, Palantir, DSTL and Jane Street. Our recent work has included building model evaluations suites – such as Replibench - the world’s most comprehensive evaluation suite for understanding the risk of a model autonomously replicating itself over the internet. We also regularly test the cyber and other relevant capabilities of frontier models, before they are released, to understand their risks.
As AI systems become more advanced, the potential for misuse of their cyber capabilities may pose a threat to the security of organisations and individuals. Cyber capabilities also form common bottlenecks in scenarios across other AI risk areas such as harmful outcomes from biological and chemical capabilities and from autonomous systems. One approach to better understanding these risks is by conducting robust empirical tests of AI systems so we can better understand how capable they currently are when it comes to performing cyber security tasks. In this role, you'll join a strongly collaborative team to help create new kinds of capability and safety evaluations to evaluate frontier AI systems as they are released.
About the Role
This is a cybersecurity engineer position focused on building environments and challenges to benchmark the cyber capabilities of AI systems. You'll design cyber ranges, CTF-style tasks, and evaluation infrastructure that allows us to rigorously measure how well frontier AI models perform on real-world cybersecurity tasks.
This work belongs inside UK government because understanding AI cyber capabilities is critical to national security, and robust empirical testing requires coordination across government, industry, and international partners to inform policy decisions on AI safety.
You'll work closely with research engineers, infrastructure engineers, and machine learning researchers across AISI. As a small, fast-moving team building first-of-its-kind evaluation infrastructure, you'll be able to influence research directions, own whole pieces of work, and bring your ideas to the table.
Core Responsibilities
Example Projects
Impact
Your work will directly shape the UK government's understanding of AI cyber capabilities, inform safety standards for frontier AI systems, and contribute to the global effort to develop rigorous evaluation methodologies. The evaluations you build will help determine how advanced AI systems are assessed before deployment
What we are looking for
We're flexible on the exact profile and expect successful candidates will meet many (but not necessarily all) of the criteria below.
Essential
Preferred
Example backgrounds
Core requirements
What We Offer
Impact you couldn't have anywhere else
Resources & access
Growth & autonomy
Life & family*
*These benefits apply to direct employees. Benefits may differ for individuals joining through other employment arrangements such as secondments.
Salary
Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 made up of a base salary plus a technical allowance (take-home salary = base + technical allowance). An additional 28.97% employer pension contribution is paid on the base salary.
This role sits outside of the DDaT pay framework given the scope of this role requires in depth technical expertise in frontier AI safety, robustness and advanced AI architectures.
The full range of salaries are available below:
Selection Process
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
The interview process may vary candidate to candidate, however, you should expect a typical process to include some technical proficiency tests, discussions with a cross-section of our team at AISI (including non-technical staff), conversations with your team lead. The process will culminate in a conversation with members of the senior team here at AISI.
Candidates should expect to go through some or all of the following stages once an application has been submitted:
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives the details from participating government organisations of civil servants who have been dismissed, or who would have been dismissed had they not resigned, for internal fraud. In instances such as this, civil servants are then banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisations. DLUHC then carry out the pre employment checks so as to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is ensured and the repetition of internal fraud is prevented. For more information please see - Internal Fraud Register.
We may be able to offer roles to applicant from any nationality or background. As such we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).