Meet our project leads!

Our project leads come from organisations at the cutting edge of AI safety and governance.

AI Policy

Séb Krier

Policy Development & Strategy

Google DeepMind

Séb Krier is an artificial intelligence policy expert, adviser, and attorney working to improve the regulatory and legal frameworks governing AI and studying the impacts of emerging technologies on democracy and human rights. He works in the Policy Development & Strategy team at Google DeepMind.

Séb joined the UK Government’s Office for Artificial Intelligence as Head of Regulation in 2018. His work involved leading the world’s first comprehensive review of AI in the public sector and designing policies to address novel issues such as the oversight of automated decision-making systems. He also represented the United Kingdom at various multilateral forums such as the European Commission, the World Economic Forum, the OECD, and the Digital Nations.

Julian Jacobs

Researcher

Google DeepMind

Julian is a Researcher at Google DeepMind and PhD student at Oxford, specialising in comparative political economy. His research areas include artificial intelligence, the political implications of technological shocks, inequality, debt, and polarisation. He is a recipient of the Fulbright Scholarship. Prior to coming to Oxford, he received his MSc in Political Science and Political Economy from the London School of Economics and his BA in Philosophy, Politics, and Economics from Brown University. He previously worked for the Office of Barack Obama, The Brookings Institution, and the Center for AI Safety.

Dan Hendrycks

Executive Director

Center for AI Safety

Dan is the executive director and co-founder of the Center for AI Safety.

He received his PhD from UC Berkeley, where he was advised by Dawn Song and Jacob Steinhardt.

He is an advisor to xAI and Scale AI.

Markus Anderljung

Director of Policy and Research

GovAI

Markus's work focuses on measuring the economic impacts of frontier AI systems and designing policy options to help ensure that advanced AI can foster broadly shared economic prosperity. He previously conducted research at OpenAI and worked on a randomised controlled trial of a guaranteed income programme in the US. Markus holds an MSc in International and Development Economics from the University of San Francisco.

Rob Trager

Senior Professor

Oxford Martin School

Rob is Co-Director of the Oxford Martin AI Governance Initiative, International Governance Lead at the Centre for the Governance of AI, and Senior Research Fellow at the Blavatnik School of Government at the University of Oxford. He is a recognised expert in the international governance of emerging technologies, diplomatic practice, institutional design, and technology regulation. He regularly advises government and industry leaders on these topics.

Nick Caputo

Legal Researcher

Oxford Martin School

Nick is a legal researcher at the Oxford Martin AI Governance Initiative where he works on domestic and international regulation of AI as well as how AI and the law can inform and shape each other.

Prior to taking up his position at the Initiative, Nick graduated with honors from Harvard Law School where he focused on law and technology, constitutional law, and public international law.

Anton Korinek

Professor

University of Virginia

Anton Korinek is a Professor in the Department of Economics and at the Darden School of Business at the University of Virginia, as well as a David M. Rubenstein Fellow at the Brookings Institution. His current research and teaching analyze the implications of artificial intelligence for business, the economy, and the future of society. He is also a Research Associate at the NBER, the CEPR, and the Oxford Centre for the Governance of AI, and an editor of the Oxford Handbook of AI Governance.

Eli Lifland

Researcher

AI Futures Project

Eli works on scenario forecasting and specializes in AI capability predictions. He also co-founded and advises Sage, which builds interactive AI explainers. He previously worked on Elicit, an AI-powered research assistant, and co-created TextAttack, a Python framework for adversarial examples in text. He placed first on the RAND Forecasting Initiative all-time leaderboard.

Mauricio B

Technology & Security Policy Fellow

DC Think Tank, Ex-OpenAI Contractor

Mauricio specialises in the technical aspects of AI governance, with a focus on AI hardware (or "compute") and verification of compliance with rules on AI. His research aims to give policymakers insight into the relevant technical landscape and policy options. Mauricio was a mentor for the ML Alignment and Theory Scholars (MATS) Program and previously contracted with OpenAI's Policy Research team. He completed a master's in Computer Science at Stanford, with an AI specialisation.

Saad Siddiqui

AI Policy Researcher

Safe AI Forum

Saad is an AI Governance Researcher at the Safe AI Forum (SAIF). Prior to his work at SAIF, Saad was a management consultant at Bain and Company and a Winter Fellow at the Centre for the Governance of AI, where he focused on identifying areas of potential Sino-western cooperation on AI safety and governance. Saad holds an MA in Global Affairs from Tsinghua University and a BA in Politics and Anthropology from the University of Cambridge.

Isabella Duan

AI Policy Researcher

Safe AI Forum

Isabella is an AI Governance Researcher at the Safe AI Forum (SAIF). Prior to her work at SAIF, Isabella interned at Google DeepMind and the Centre for the Governance of AI, developing social impact evaluations for frontier AI models. She is an MA candidate in Computational Social Science at the University of Chicago and a co-founder of the University's Existential Risk Laboratory. She holds a BS in Philosophy, Politics, and Economics from University College London.

Suryansh Mehta

Co-Founder / Research & Communications

FIG / Longview Philanthropy

Suryansh specialises in communicating cutting-edge research on emerging technologies to Longview’s advisees. His prior work includes contributing to and editing the textbook AI Safety, Ethics, and Society with Dan Hendrycks at the Center for AI Safety and co-founding Future Impact Group, an organisation dedicated to field-building through its research fellowship program. On the research front, Suryansh has worked extensively on AI safety and governance at the Centre for the Governance of AI and the University of Oxford, where he was awarded two degrees: a first-class BA in Philosophy, Politics, and Economics and an MPhil in Economics with distinction.

Deric Cheng

AI Policy Researcher

Convergence Analysis

Deric leads the Governance Recommendations Team at Convergence Analysis, focusing on AI policy, chip registries, and economic impacts. Previously, he worked at Alchemy, where he built web3.university, and at Google X, where he helped develop real-time translation for Pixel Buds. He organizes an annual camping & music festival and holds a BS in Computer Science from Princeton.

Philosophy for Safe AI

Andreas Mogensen

Senior Research Fellow

GPI, Oxford

Andreas is a Senior Research Fellow in philosophy at Oxford's Global Priorities Institute. Before coming to GPI, he worked as a Tutorial Fellow at Jesus College and was an Examination Fellow at All Souls College from 2010 to 2015. His current research interests are primarily in normative and applied ethics. His previous publications have addressed topics in meta-ethics and moral epistemology, especially those associated with evolutionary debunking arguments.

Patrick Butlin

Postdoctoral Research Fellow

GPI, Oxford

Patrick is a philosopher of mind and cognitive science, and a researcher at the Global Priorities Institute at the University of Oxford. Much of his research is on mental capacities and attributes in artificial intelligence, such as consciousness, agency, and understanding. He explains, “AI is intrinsically interesting, but thinking about artificial implementations also helps us to explore mechanistic models of these capacities and attributes as they exist in humans and other animals.”

Brad Saad

Senior Research Fellow

GPI, Oxford

Brad is a Senior Research Fellow in philosophy at Oxford's Global Priorities Institute. His past research has focused on phenomenal consciousness and mental causation, their place in the world, and empirical constraints on theorising about them. More recently, he has been thinking about digital minds, catastrophic risks, and the long-term future.

Derek Shiller

Senior Researcher

Rethink Priorities

Derek is a Senior Researcher at Rethink Priorities. He has a PhD in philosophy from Princeton University and a degree in Mathematics and Philosophy from Yale University.

Derek has written on topics in metaethics, consciousness, and the philosophy of probability. Before joining Rethink Priorities, Derek worked as the lead web developer for The Humane League.

Elliott Thornley

Research Fellow

GPI, Oxford

Elliott is a Postdoctoral Research Fellow at the Global Priorities Institute and a Research Affiliate at the Center for AI Safety. He completed a PhD in Philosophy at Oxford University, where he wrote about the moral importance of preventing global catastrophes and protecting future generations. He is now using techniques from decision theory to predict the likely behaviour of advanced artificial agents. He is also investigating ways we might ensure that these agents obey human instructions.

Lewis Hammond

Co-Director

Cooperative AI Foundation

Lewis is a DPhil candidate in computer science at the University of Oxford and co-director of the Cooperative AI Foundation. He is affiliated with the Centre for the Governance of AI and a ‘Pathways to AI Policy’ Fellow at the Wilson Center. His research concerns safety and cooperation in multi-agent systems, motivated by the problem of ensuring that AI and other powerful technologies are developed and governed safely and democratically. Before coming to Oxford, he obtained a BSc in mathematics and philosophy from the University of Warwick and an MSc in artificial intelligence from the University of Edinburgh.

Chi Nguyen

Researcher

Independent

Chi is working independently on making AI systems reason safely about decision theory and acausal interactions, collaborating with Caspar Oesterheld and Emery Cooper. Before doing independent research, Chi worked for the Center on Long-Term Risk on s-risk reduction projects (hiring, community building, and grantmaking).

Chi studied PPE at Oxford (2018-2021) and psychology in Freiburg (2015-2018).

Caspar Oesterheld

PhD Student

Foundations of Cooperative AI Lab

Caspar is a Computer Science PhD student at the Foundations of Cooperative AI Lab (FOCAL) at Carnegie Mellon University. He is supervised by Vincent Conitzer.

Caspar is interested in foundational topics in theoretical computer science, game theory and AI safety.

He is co-leading a project on training AIs to aid decision theory & acausal research with Chi Nguyen and Emery Cooper.

Emery Cooper

Research Associate

Carnegie Mellon University

Emery is a research associate at the Department of Computer Science at Carnegie Mellon University.

Emery received an MMath in mathematics and statistics from the University of Cambridge and studied biostatistics at the MRC Biostatistics Unit.

She is co-leading a project on training AIs to aid decision theory & acausal research with Chi Nguyen and Caspar Oesterheld.

Lucius Caviola

Senior Research Fellow

GPI, Oxford

Lucius is a moral psychologist at the University of Oxford, studying the societal impact of artificial intelligence. His research examines how human values and decision-making shape—and are shaped by—AI systems. He aims to generate insights that help us understand and navigate the large-scale risks and opportunities that AI introduces. Beyond AI, his work focuses on prosociality, investigating how people expand their moral boundaries—extending compassion effectively beyond their immediate communities to distant populations, animals, and even potential future digital minds.

Leonard Dung

Postdoctoral Researcher

Ruhr-University Bochum

Leonard is a postdoctoral researcher ("Wissenschaftlicher Mitarbeiter") at the Chair for Philosophy of Mind at the Ruhr-University Bochum. From April 2023 to September 2024, he worked at the Centre for Philosophy and AI Research at the University of Erlangen-Nürnberg, directed by Prof. Vincent Müller. In April 2023, he passed his PhD defence in Philosophy at the Ruhr-University, supervised by Prof. Albert Newen and Prof. Colin Allen. He holds master's degrees in Philosophy (University of Bonn, 2021) and Mind, Language, and Embodied Cognition (University of Edinburgh, 2020).

Atoosa Kasirzadeh

Assistant Professor

Carnegie Mellon University

Atoosa is a philosopher and AI researcher with a track record of publications on the ethics and governance of AI and computing. In December 2024, she joined Carnegie Mellon University as a tenure track Assistant Professor with joint affiliations in the Philosophy and Software & Societal Systems departments. Previously, she was a visiting faculty at Google Research, a Chancellor’s Fellow and Research Lead at the University of Edinburgh’s Centre for Technomoral Futures, a Group Research Lead at the Alan Turing Institute, a DCMS/UKRI Senior Policy Fellow, and a Governance of AI Fellow at Oxford. Atoosa holds two doctoral degrees: a Ph.D. in Philosophy of Science and Technology from the University of Toronto and a Ph.D. in Mathematics (Operations Research) from the École Polytechnique de Montréal.

METR

Various Researchers

Model Evaluation & Threat Research

METR is a non-profit that conducts empirical research to determine whether frontier AI models pose a significant threat to humanity. METR helped establish autonomous replication evaluations as common practice, worked with OpenAI and Anthropic to evaluate their models before release, and secured early commitments from labs in the form of responsible scaling policies (RSPs). Its work has been cited by the UK government, Barack Obama, and others, and its connections to labs, governments, and academia mean that any insights it uncovers can be leveraged quickly.