Lecturer in AI Safety

University of York
On-site
York, England, United Kingdom
£45,413 to £55,755 per annum

Role Description

Department

The vision of the Department of Computer Science at the University of York is to be internationally leading in education and research on the engineering of safe, ethical and secure computational systems. We are seeking to appoint an excellent researcher and teacher to join our High Integrity Systems Engineering (HISE) research group, which has a long-established reputation in safety assessment for complex computer-based systems.

Role

The post is a proleptic lectureship in the Department of Computer Science, associated with the Centre for Assuring Autonomy (CfAA). You will initially work as a Research Fellow in the CfAA for three years, before taking up a Lectureship in 2028. This unique and exciting fellowship is supported by a philanthropic gift from the Joan & Irwin Jacobs Foundation. The CfAA is a £10m partnership between Lloyd's Register Foundation and the University of York dedicated to pioneering research and innovation at the intersection of AI and safety. It has been running since January 2018 (initially as the Assuring Autonomy International Programme (AAIP)) and has made significant advances in the safety assurance of AI and autonomous systems across a range of applications and domains, advances that are already beginning to influence regulatory practice.

You will have the skills and experience to contribute to the core activities of the CfAA, which focus on drawing together AI and safety analysis methods into a coherent approach that allows the potential benefits of AI to be realised while making a positive contribution to safety. To this end, you will already be familiar with both safety analysis methods and AI, with publications that span the two disciplines. Whilst the post initially has a research focus and you will contribute to the CfAA research team, you will have the option to contribute to teaching and administration to help the transition to the lectureship. As part of the application process, we invite applicants to present their vision for one of three priority areas:

  • Safety of Large Language Models
  • Integrated Safety and Security of AI
  • Safety of Embodied AI

Skills, Experience & Qualifications needed

  • PhD in AI, safety analysis or their combination, or equivalent experience
  • Specialist knowledge in AI, including mainstream methods, e.g. deep learning, and awareness of the state-of-the-art in key sub-topics, e.g. LLMs, GenAI or explainability
  • Knowledge of safety analysis, including software safety analysis methods
  • Proven ability to contribute to high-quality research, publicly evidenced
  • Ability to develop research objectives, projects and proposals
  • Willingness to work proactively with colleagues in other work areas/institutions

Interview dates: 7th and/or 12th August

For informal enquiries, contact Professor Ibrahim Habli at ibrahim.habli@york.ac.uk
