Our project leads are from organisations at the cutting edge of AI safety.
AI Policy
Constellation
Astra Fellowship
January 2026
Constellation Research Center is an independent research organisation dedicated to safely navigating the development of transformative artificial intelligence by fostering collaboration across the global AI safety ecosystem. Founded in 2023 and based in Berkeley, California, Constellation brings together researchers from nonprofits, academia, industry, and government through programmes including its flagship Astra Fellowship and research initiatives focused on reducing risks from advanced AI systems. The centre has placed dozens of programme participants into full-time safety roles at leading organisations such as Anthropic, OpenAI, Google DeepMind, METR, and government AI safety institutes.
Risto Uuk
Head of EU Policy and Research
Future of Life Institute
Risto leads policy and research efforts at the Future of Life Institute to maximise the societal benefits of increasingly powerful AI systems, whilst also serving as a PhD Researcher at KU Leuven, where he studies the assessment and mitigation of systemic risks posed by general-purpose AI. He runs the biweekly EU AI Act Newsletter, which reaches over 45,000 subscribers and has become one of the leading resources for information about the AI Act.
Marta Ziosi
Postdoctoral Researcher
Oxford Martin AI Governance Initiative
Marta is a Postdoctoral Researcher at the Oxford Martin AI Governance Initiative, where her research focuses on standards for frontier AI. She was selected as one of the Vice-Chairs drafting the EU General-Purpose AI Code of Practice, which will guide the implementation of the EU AI Act's rules for general-purpose AI. She is also the founder of AI for People, a non-profit organisation whose mission is to put technology at the service of people.
Miro Pluckebaum
Strategy & Research Manager
Oxford Martin AI Governance Initiative
Miro supports the Oxford Martin AI Governance Initiative on strategy and research management and is a Programme Specialist at the Centre for the Governance of AI. He is the Founder of the Singapore AI Safety Hub, a platform for AI safety governance, technical research, and ecosystem development in Asia. Previously, he spent ten years working on AI products and governance across enterprises and startups in Europe and Asia.
Robert Trager
Co-Director
Oxford Martin AI Governance Initiative
Robert is Co-Director of the Oxford Martin AI Governance Initiative, International Governance Lead at the Centre for the Governance of AI, and Senior Research Fellow at the Blavatnik School of Government at the University of Oxford. He is a recognised expert in the international governance of emerging technologies, diplomatic practice, institutional design, and technology regulation. He regularly advises government and industry leaders and has published extensively in leading journals including the American Political Science Review, International Organization, and Foreign Affairs.
Richard Mallah
Executive Director & Founder
Center for AI Risk Management & Alignment
Richard founded and leads CARMA, where he directs projects in risk assessment, policy strategy, and technical safety. He also serves as Principal AI Safety Strategist at the Future of Life Institute, which he joined in 2014. With over twenty years of experience in machine learning and AI across industry roles spanning algorithms research, research management, and strategy consulting, he has focused on advanced AI safety since 2010, developing frameworks for understanding pathways to AGI and corresponding governance recommendations.
Nick Caputo
Legal Researcher
Oxford Martin School
Nick is a Legal Researcher at the Oxford Martin AI Governance Initiative, where he works on domestic and international regulation of AI and how AI and the law can inform and shape each other. His work focuses on legal alignment, risk management, open source AI, and building governance institutions for the AI era. He graduated with honours from Harvard Law School, where he focused on law and technology, constitutional law, and public international law, and has experience with strategic litigation and policy advocacy in the US and EU.
Jonathan Birch
Professor of Philosophy
London School of Economics
Jonathan is a Professor of Philosophy at LSE and Director of the Jeremy Coller Centre for Animal Sentience. He is the Principal Investigator on the Foundations of Animal Sentience project and was the lead author of a government report that led to cephalopods and decapod crustaceans being recognised as sentient under UK law. His recent book "The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI" (2024) examines sentience across biological and artificial systems, and he is a co-author of the New York Declaration on Animal Consciousness.
Deric Cheng
AI Policy Researcher
Convergence Analysis
Deric is an AI Policy Researcher at Convergence Analysis and Director of Research for the Windfall Trust, a non-profit focused on ensuring that the economic benefits of advanced AI are shared by everyone. He leads the AGI Social Contract, a consortium of experts proposing concrete strategies to design a new social contract for a post-AGI society. Previously, he was the fifth software engineer at Alchemy (now valued at $10.2 billion) and a rapid prototyping researcher at Google's Interaction Lab, where he invented and built the first real-time translation feature for wireless earbuds.
Andrew Sutton
Researcher
Oxford Martin AI Governance Initiative
Andrew is a researcher at the Oxford Martin AI Governance Initiative, where he works on the Social Impact of Emerging Technologies research programme. His work identifies the key social, ethical, and policy questions raised by the adoption of emerging technologies, examines disparities in access and benefits, and analyses governance mechanisms for promoting inclusive technological progress. He draws on interdisciplinary expertise to advise policymakers, industry leaders, and civil society organisations on ensuring that the benefits of emerging technologies are shared equitably across society.
Liam Patell
Research Scholar
GovAI
Liam researches US AI policy, with a focus on national security strategy and international competition. Before joining GovAI, he was a fellow with the AI Futures Project and Convergence Analysis. He holds a BPhil in Philosophy from the University of Oxford.
Anthropic
Anthropic Fellowship
January 2026
Anthropic is a leading AI safety research company focused on developing robust governance frameworks and safety methodologies to ensure advanced AI systems remain beneficial and controllable as they become more capable. The company pioneers research in AI alignment, interpretability, and Constitutional AI techniques, whilst actively engaging with policymakers, governments, and international bodies to shape responsible AI governance standards and regulatory approaches. Through its safety-first approach to frontier AI development and its commitment to transparent research sharing, Anthropic works to establish the technical and policy foundations necessary for humanity to safely navigate the development of transformative artificial intelligence.
Eleni Angelou
PhD Candidate
CUNY Graduate Center
Eleni is a PhD candidate at the CUNY Graduate Center, where her dissertation focuses on modelling the cognition of large language models using tools from philosophy of science and cognitive science. She has worked extensively in technical AI safety, particularly on evaluations of science-related capabilities and on interpretability, and was a Research Lead for the AI Science team at AI Safety Camp 2023. In addition to her research, she teaches a course on AI and the future of humanity at Queens College CUNY.
Ben Henke
Research Fellow
Institute of Philosophy, University of London
Ben is a Research Fellow at the Institute of Philosophy at the School of Advanced Study, University of London, where he serves as Associate Director of the London AI and Humanity Project. He is also a Research Associate in Artificial Intelligence at Imperial College London's Department of Computing, working with Murray Shanahan as part of the Leverhulme Centre for the Future of Intelligence at Cambridge. His research focuses on the philosophy of cognitive science, artificial intelligence, and epistemology, with recent publications in Philosophy of Science and The Journal of Philosophy.
Hayley Clatterbuck
Senior Researcher
Rethink Priorities
Hayley is a Senior Researcher on the Worldview Investigations team at Rethink Priorities and a Visiting Associate Professor at UCLA. She specialises in philosophy of biology, philosophy of cognitive science, and general philosophy of science, with particular expertise in evolutionary theory, concept learning, and animal cognition. She previously held academic positions at the University of Wisconsin-Madison and has published extensively on topics including evolutionary mechanisms, scientific theory change, and conceptual development.
Philosophy for Safe AI
Elliott Thornley
Postdoctoral Associate
Massachusetts Institute of Technology
Elliott is a Postdoctoral Associate at MIT, having previously been a Research Fellow at the Global Priorities Institute, University of Oxford, and will be an Assistant Professor of Philosophy at NUS from August 2026. His research focuses on AI alignment, using ideas from decision theory to design and train safer artificial agents. He is known for his work on the "shutdown problem" in AI safety and also conducts research in normative ethics, particularly on population ethics and the moral importance of future generations.
Lewis Hammond
Co-Director
Cooperative AI Foundation
Lewis is Co-Director of the Cooperative AI Foundation and a DPhil candidate in Computer Science at the University of Oxford. He is also affiliated with the Centre for the Governance of AI and is a 'Pathways to AI Policy' Fellow at the Wilson Center. His research concerns safety and cooperation in multi-agent systems, motivated by the aim of ensuring that AI and other powerful technologies are developed and governed safely and democratically. He has a background in mathematics and philosophy from Warwick and in artificial intelligence from Edinburgh.
Jeff Sebo
Affiliated Professor of Bioethics, Medical Ethics, Philosophy, and Law
New York University
Jeff works primarily on moral philosophy, legal philosophy, and philosophy of mind; animal minds, ethics, and policy; AI minds, ethics, and policy; global health and climate ethics and policy; and global priorities research. He is the author of "The Moral Circle" (2025) and "Saving Animals, Saving Ourselves" (2022) and co-author of "Chimpanzee Rights" (2018) and "Food, Animals, and the Environment" (2018).
Prior to this post, Jeff worked as Research Assistant Professor of Philosophy and Associate Director of the Parr Center for Ethics at the University of North Carolina at Chapel Hill (2015-2017), as Postdoctoral Fellow in Bioethics at the National Institutes of Health (2014-2015), and as Assistant Professor / Faculty Fellow in Animal Studies and Environmental Studies at New York University (2011-2014).
AI Sentience
Patrick Butlin
Senior Research Lead
Eleos AI Research
Patrick is a Senior Research Lead at Eleos AI Research and was previously a Research Fellow at the Global Priorities Institute and Future of Humanity Institute at Oxford University. He is a philosopher with research interests in AI consciousness, agency, and moral patienthood. He co-authored the influential 2023 paper "Consciousness in Artificial Intelligence" and was a key contributor to Eleos's flagship report "Taking AI Welfare Seriously", which argues that AI companies need to prepare for the possibility that future systems may be conscious.
Rosie Campbell
Managing Director
Eleos AI Research
Rosie is the Managing Director of Eleos AI Research, a non-profit dedicated to understanding and addressing the potential wellbeing and moral patienthood of AI systems. Previously, she was a policy researcher at OpenAI, where she led the Policy Frontiers team and worked on AGI readiness, and was Head of Safety-Critical AI at the Partnership on AI. She was also Assistant Director of the Center for Human-Compatible AI at UC Berkeley and a Research Engineer at BBC R&D, with degrees in Physics and Computer Science.
Robert Long
Executive Director & Co-Founder
Eleos AI Research
Robert is the Executive Director and Co-Founder of Eleos AI Research. He has a PhD in Philosophy from NYU and previously worked as a researcher at the Center for AI Safety and at the Future of Humanity Institute at Oxford University. A leading researcher on AI consciousness, he co-authored the report "Taking AI Welfare Seriously" and the paper "Consciousness in Artificial Intelligence".
Seth Lazar
Professor of Philosophy
Australian National University
Seth is a Professor of Philosophy at the Australian National University and leads the Machine Intelligence and Normative Theory (MINT) Lab, where he directs research on the moral and political philosophy of AI. He is an Australian Research Council Future Fellow, a Distinguished Research Fellow at the University of Oxford Institute for Ethics in AI, and a Senior AI Advisor to the Knight First Amendment Institute. He was General Co-Chair of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT) and gave the 2023 Tanner Lectures on AI and Human Values at Stanford University.
Brad Saad
Senior Research Fellow
University of Oxford
Brad is a Senior Research Fellow in philosophy at Oxford's Global Priorities Institute, where he works on digital minds, catastrophic risks, and the long-term future. His research focuses on phenomenal consciousness and mental causation, with recent work examining the potential for suffering in future digital minds and the moral status of AI systems. He has co-authored influential papers on digital suffering and AI alignment challenges, and his work explores the intersection of philosophy of mind with existential risk research.
Derek Shiller
Senior Researcher
Rethink Priorities
Derek is a Senior Researcher at Rethink Priorities with a PhD in philosophy from Princeton University. He works on the Worldview Investigations team, focusing on digital consciousness research and the moral status of artificial intelligence systems. He has written extensively on topics in metaethics, consciousness, and the philosophy of probability, and is currently leading projects to estimate the probabilities of consciousness in near-future AIs and develop frameworks for assessing AI moral patienthood.
Janet Pauketat
Research Fellow
Sentience Institute
Janet is a Research Fellow at Sentience Institute with a PhD in Psychological and Brain Sciences from UC Santa Barbara and an MRes from the University of St Andrews. She designs and leads the nationally representative Artificial Intelligence, Morality, and Sentience (AIMS) survey, which tracks US public opinion on AI safety, artificial sentience, and the moral consideration of AIs. Her research focuses on human-AI interaction, digital minds, moral circle expansion, and the psychological predictors of moral consideration for artificial intelligences.