Meet our project leads!

Our project leads come from organisations at the cutting edge of AI safety and governance.

Policy

Saad Siddiqui

Safe AI Forum

Saad is an AI Governance Researcher at the Safe AI Forum (SAIF), fiscally sponsored by FAR.AI. Prior to his work at SAIF, Saad was a management consultant at Bain & Company and a Winter Fellow at the Centre for the Governance of AI, where he focused on identifying areas of potential Sino-Western cooperation on AI safety and governance. Saad holds an MA in Global Affairs from Tsinghua University and a BA in Politics and Anthropology from the University of Cambridge.

Rob Trager

Oxford Martin School

Robert F. Trager is Co-Director of the Oxford Martin AI Governance Initiative, International Governance Lead at the Centre for the Governance of AI, and Senior Research Fellow at the Blavatnik School of Government at the University of Oxford. He is a recognised expert in the international governance of emerging technologies, diplomatic practice, institutional design, and technology regulation. He regularly advises government and industry leaders on these topics.

Mauricio B

DC Think Tank, ex-OpenAI

Mauricio specialises in the technical aspects of AI governance, with a focus on AI hardware (or "compute") and verification of compliance with rules on AI. His research aims to give policymakers insight into the relevant landscape and policy options. Mauricio was a mentor for the ML Alignment and Theory Scholars (MATS) Program, and previously contracted with OpenAI’s Policy Research team. He completed a master’s in Computer Science at Stanford, with an AI specialisation.

Julian Jacobs

Google DeepMind

Julian Jacobs is a Researcher at Google DeepMind and a PhD student at Oxford, specialising in comparative political economy. His research areas include artificial intelligence, the political implications of technological shocks, inequality, debt, and polarisation. He is a recipient of a Fulbright Scholarship. Prior to coming to Oxford, he received his MSc in Political Science and Political Economy from the London School of Economics and his BA in Philosophy, Politics, and Economics from Brown University. He previously worked for the Office of Barack Obama, the Brookings Institution, and the Center for AI Safety.

Oliver Ritchie

GovAI

Oliver’s research is focused on UK policy, particularly how the government can make effective AI policy that supports economic growth and better public services while also protecting people from harm or unfair treatment. He has previously worked as a social researcher, helped academics maximise real-world impact, and—at HM Treasury—advised government ministers on topics including tax reform, international negotiations, corruption, decarbonisation, and the COVID-19 response.

Sam Manning

GovAI

Sam’s work focuses on measuring the economic impacts of frontier AI systems and designing policy options to help ensure that advanced AI can foster broadly shared economic prosperity. He previously conducted research at OpenAI and worked on a randomised controlled trial of a guaranteed income programme in the US. Sam holds an MSc in International and Development Economics from the University of San Francisco.

Isabella Duan

Safe AI Forum

Isabella is an AI Governance Researcher at the Safe AI Forum (SAIF), fiscally sponsored by FAR.AI. Prior to her work at SAIF, Isabella interned at Google DeepMind and the Centre for the Governance of AI, developing social impact evaluations for frontier AI models. She is an MA candidate in Computational Social Science at the University of Chicago and a co-founder of the University’s Existential Risk Laboratory. She holds a BSc in Philosophy, Politics, and Economics from University College London.

Model Evaluation and Threat Research (METR)

METR is a non-profit that conducts empirical research to determine whether frontier AI models pose a significant threat to humanity. METR helped establish autonomous replication evaluations as common practice, has worked with OpenAI and Anthropic to evaluate their models pre-release, and secured early commitments from labs in the form of responsible scaling policies (RSPs).

METR’s work has been cited by the UK government, President Obama, and others, and the organisation is well connected to labs, governments, and academia, so any insights it uncovers can quickly be put to use.

Philosophy

Lewis Smith

Google DeepMind

Lewis is a Research Scientist at Google DeepMind in London, where he works on the Language Model Interpretability team, trying to better understand how exactly large language models work. Before joining DeepMind, he was a machine learning engineer on the foundation model training team at Cohere. Lewis completed his DPhil at the University of Oxford through the Autonomous Intelligent Machines & Systems CDT, working on machine learning and related fields, and also holds degrees in physics from the University of Manchester.

Seb Farquhar

Google DeepMind

Seb is a Senior Research Scientist at Google DeepMind, working towards Artificial General Intelligence (AGI) alignment on the AGI Safety and Alignment Team. He is also an associate member of the OATML group at the University of Oxford, working with Yarin Gal. Previously, he led the Global Priorities Project – a joint project of the Centre for Effective Altruism and the Future of Humanity Institute at the University of Oxford – which connected research from the University to policymakers. Prior to that, he worked at McKinsey.

Elliott Thornley

Global Priorities Institute, Oxford

Elliott Thornley is a Postdoctoral Research Fellow at the Global Priorities Institute and a Research Affiliate at the Center for AI Safety. He completed a PhD in Philosophy at the University of Oxford, where he wrote about the moral importance of preventing global catastrophes and protecting future generations. He is now using techniques from decision theory to predict the likely behaviour of advanced artificial agents. He is also investigating ways we might ensure that these agents obey human instructions and allow themselves to be turned off.

David Althaus

Polaris Ventures

David is a researcher and grantmaker at Polaris Ventures. He has been involved in Effective Altruism since 2012, having worked for the Effective Altruism Foundation and the Center on Long-Term Risk. His research interests include suffering risks, malevolent actors, and intuitions about population ethics. David holds an MSc in Psychology.

Teo Ajantaival

Center for Reducing Suffering

Teo Ajantaival is a Researcher at the Center for Reducing Suffering. His work has explored fundamental questions related to ethics, value theory, and philosophy of wellbeing, as well as reasons to be careful about the practical implications of abstract formalisms. He holds an MA in Psychology from the University of Helsinki.

Brad Saad

Global Priorities Institute, Oxford

Brad is a Senior Research Fellow in philosophy at Oxford's Global Priorities Institute. His past research has focused on phenomenal consciousness and mental causation, their place in the world, and empirical constraints on theorising about them. More recently, he has been thinking about digital minds, catastrophic risks, and the long-term future.