AI Policy projects conduct robust, multidisciplinary research to inform governments’ responses to AI.
FIG helps you build career capital. You can spend 5-10 hours a week working on foundational philosophical issues that can improve technical AI safety and mitigate catastrophic risks.
Our project leads are looking for postgraduate students across multiple fields (including computer science and philosophy), people with experience in machine learning, decision and game theory specialists, and well-read generalists with a track record of high-quality written work.
Scroll down to learn more. On this page, we list our focus areas, project leads, and open projects.
Applications for the Winter 2025 FIG Fellowship are now open!
Apply here by ???!
Focus Areas
In the next few months, we will work on:
Economics & Society: measuring the economic effects of advanced AI and proposing ways to manage them.
National & International Policy: proposing how the US executive branch should regulate AI, determining the effect of regulation on AI releases, and helping countries coordinate on managing AI risks.
Writing & Journalism: concise, incisive analyses to guide the decision-making of DeepMind, Longview Philanthropy, and the AI safety community.
Miscellaneous: exploring wider questions, including metascience research for AI safety R&D and the governance of agentic AI through case studies from finance.
Project Leads
-
Anton Korinek (University of Virginia) is working on measuring economic growth from an AI perspective.
Julian Jacobs (Google DeepMind) is conducting research projecting the economic impacts of AI.
Deric Cheng (Convergence Analysis) is designing economic policies for a post-AGI economy.
Read more below.
-
Eli Lifland (AI Futures Project) is working on an executive branch playbook for AI.
Saad Siddiqui & Isabella Dunn (Safe AI Forum) are exploring state-level international conditional commitments.
Markus Anderljung (GovAI) is running a project on AI model release delays and offering the chance to work on a range of independent AI governance research projects.
Rob Trager (Oxford Martin School) is running a project on the implications of advanced AI for international security, as well as a research agenda for gold-standard AI risk management. He’s also co-leading a project with Nick Caputo (Oxford Martin School) on the governance of open-source AI.
Read more below.
-
Séb Krier (Google DeepMind) is seeking people to research and write in-depth AI policy memos for Google DeepMind.
Suryansh Mehta (FIG & Longview Philanthropy) is seeking writers to work on donor-facing memos on AI policy & grantmaking.
Dan Hendrycks (Center for AI Safety) is looking for someone interested in developing articles and essays on various topics in AI safety and its implications for society.
Read more below.
-
Mauricio B (DC Think Tank, Ex-OpenAI Contractor) is conducting metascience research to develop recommendations for how policymakers and private funders can effectively advance R&D on AI safety and governance.
Lewis Hammond (Cooperative AI Foundation) is investigating the technical and regulatory mechanisms used to monitor and stabilise algorithmic trading in financial markets, and distilling key lessons for the governance of advanced AI agents.
Read more below.
Projects
Economics & Society
Measuring the economic effects of advanced AI and proposing ways to manage them.
National & International Policy
Proposing how the US executive branch should regulate AI, determining the effect of regulation on AI releases, and helping countries coordinate on managing AI risks.
Rob Trager
Senior Professor,
Oxford Martin School
Mitigating AI-Driven Power Concentration
The aim of this project is to examine AI-driven power concentration within a political context. By combining the technical dimensions of AI safety research with elements of political theory and political science, this project will produce a comprehensive study of AI’s role in “autocratization”, democratic backsliding, and democratisation.
Research Questions
How are political regimes and economic elites currently using AI to centralize power?
Which AI systems pose the highest risk and which regime types are most vulnerable?
How does AI-driven power concentration erode democratic norms and institutions?
What policies can mitigate these risks across contexts?
Outputs
A structured assessment framework for analyzing AI in authoritarian settings.
A catalog of high-risk use cases impacting democracies.
A comparative policy analysis of interventions and best practices.
Fellow Contributions
Conduct literature reviews, draft sections of papers, and develop case studies.
Support comparative policy analysis by engaging in research at the intersection of AI safety, democracy, and governance.
Generate evidence-based interventions with real-world policy impact.
-
Academic Background: Advanced undergraduate or graduate student in machine learning, computer science, political science, international relations, public policy, law, or related fields, with interdisciplinary backgrounds particularly welcomed.
Core Knowledge: Familiarity with AI governance debates and strong understanding of political institutions and technology's interaction with governance; technical AI knowledge beneficial but not essential.
Research Capabilities: Excellent analytical and writing skills with proven ability to conduct literature reviews, synthesise findings, and produce structured, evidence-based research and policy insights.
Experience Required: Prior research experience (academic or policy-focused) essential, with experience in report writing, policy briefs, or academic papers highly valued; international organisation engagement desirable but not mandatory.
Working Style & Values: Self-directed researcher comfortable with minimal supervision whilst contributing to collaborative teams, genuinely concerned about democratic challenges and motivated to contribute meaningfully to this research field.
Miro Pluckebaum
Founder, Singapore AI Safety Hub
Strategy and Research Management, Oxford Martin AI Governance Initiative
Developing an AI Safety Agenda for Singapore
Singapore is emerging as an important actor in the AI governance landscape, hosting international convenings between East and West, building out one of the world's leading AI Safety Institutes, and facilitating the creation of a commercial AI assurance ecosystem.
- This project aims to identify the additional concrete AI safety projects (across technical and governance research, ecosystem building, domestic policy, and diplomacy) that the Singaporean ecosystem is particularly well placed to execute.
- Doing so will require a combination of research and stakeholder interviews.
- Findings will be communicated in a public report as well as memos for stakeholders in civil service, academia and industry.
- If time remains (or as a follow-up project), we will aim to execute one of the identified projects.
-
We are looking for an experienced project lead who can operate with significant autonomy. As such, you should have existing experience driving research, policy memos, or other relevant projects (e.g. research workshops or convenings). Past experience working on AI-related projects or in the region is preferable but not required. Successful applicants might be PhD students or mid-career professionals in policy or technology companies. We may be able to source research assistants to support you on the project.
You will need: excellent stakeholder management and communication skills, strong writing and research skills, and project management experience.
Marta Ziosi
Postdoctoral Researcher, Oxford Martin AI Governance Initiative
Problems in Frontier AI Risk Management
This project investigates a central question: What are the most pressing open problems in frontier AI risk management, and what approaches could effectively address them? Although leading AI developers have announced commitments to safety, there remains little clarity on what robust and operationalized practices should entail. Existing initiatives, such as the International Scientific Report on the Safety of Advanced AI, focus on consolidating consensus, but less effort has gone into systematically identifying gaps and unresolved problems. Without this mapping, the field risks bottlenecks on the path toward shared norms and standards for safe AI.
The aim of this project is to (1) map the landscape of risk management practices for frontier AI, (2) highlight gaps where practices are underdeveloped, and (3) propose candidate solutions that could guide pre-standards work and consensus. Examples include frameworks for incorporating societal values into risk criteria, methods for managing internal deployment risks, and approaches for assessing the effectiveness of mitigations.
As part of this effort, the FIG Fellow will focus on a well-scoped subset of the project. Depending on their interests, this may involve helping to sketch and categorize gaps in a particular area of risk management, or developing proposals around an identified gap.
-
We are looking for a master’s or PhD-level candidate with a solid understanding of risk management. Technical expertise would be a strong asset (though not absolutely required), and prior policy experience would also be advantageous. The ideal candidate is able to work independently with minimal supervision, demonstrates strong initiative, and is comfortable taking a proactive approach to advancing project goals.
-
The project will run for a minimum of three months, with the possibility of continuing the research relationship depending on how the project goes. The ideal output will be either a contribution to a bigger paper or a co-authored paper, depending on the interests of the FIG Fellow.
Writing & Journalism
Concise, incisive analyses to guide the decision-making of DeepMind, Longview Philanthropy, and the AI safety community.
Suryansh, a FIG co-founder, presenting his research at the Spring 2024 Research Residency.