AI Policy projects conduct robust, multidisciplinary research to inform governments’ responses to AI.
FIG helps you build career capital. You can spend 5-10 hours a week working on AI policy research that can inform governments' responses to advanced AI and mitigate catastrophic risks.
Our project leads are looking for postgraduate students across multiple fields (including computer science and philosophy), people with experience in machine learning, decision and game theory specialists, and well-read generalists with a track record of high-quality written work.
Scroll down to learn more. On this page, we list our focus areas, project leads, and open projects.
Applications for the Spring 2025 FIG Fellowship are now open!
Apply here by EOD Saturday 8 March!
Focus Areas
In the next few months, we will work on:
Economics & Society: measuring the economic effects of advanced AI and proposing ways to manage them.
National & International Policy: proposing how the US executive branch should regulate AI, determining the effect of regulation on AI releases, and helping countries coordinate on managing AI risks.
Writing & Journalism: concise and cutting analyses to guide the decision-making of DeepMind, Longview Philanthropy, and the AI safety community.
Miscellaneous: wider questions, including metascience research for AI safety R&D and the governance of agentic AI through case studies from finance.
Project Leads
-
Anton Korinek (University of Virginia) is working on measuring economic growth from an AI perspective.
Julian Jacobs (Google DeepMind) is conducting research projecting the economic impacts of AI.
Deric Cheng (Convergence Analysis) is designing economic policies for a post-AGI economy.
Read more below.
-
Eli Lifland (AI Futures Project) is working on an executive branch playbook for AGI.
Saad Siddiqui & Isabella Duan (Safe AI Forum) are exploring state-level international conditional commitments.
Markus Anderljung (GovAI) is running a project on AI model release delays and offering the chance to work on a range of independent AI governance research projects.
Rob Trager (Oxford Martin School) is running a project on the implications of advanced AI for international security, as well as a research agenda for gold standard AI risk management. He's also co-leading a project with Nick Caputo (Oxford Martin School) on the governance of open-source AI.
Read more below.
-
Séb Krier (Google DeepMind) is seeking people to research and write in-depth AI policy memos for Google DeepMind.
Suryansh Mehta (FIG & Longview Philanthropy) is seeking writers to work on donor-facing memos on AI policy & grantmaking.
Dan Hendrycks (Center for AI Safety) is looking for someone to develop articles and essays on various topics in AI safety and its relationship to society.
Read more below.
-
Mauricio B (DC Think Tank, Ex-OpenAI Contractor) is conducting metascience research to develop recommendations for how policymakers and private funders can effectively advance R&D on AI safety and governance.
Lewis Hammond (Cooperative AI Foundation) is investigating the technical and regulatory mechanisms used to monitor and stabilise algorithmic trading in financial markets, and distilling key lessons for the governance of advanced AI agents.
Read more below.
Projects
Economics & Society
Measuring the economic effects of advanced AI and proposing ways to manage them.
Anton Korinek
Professor,
University of Virginia
Measuring Economic Growth From An AI Perspective
The rise of artificial intelligence requires us to fundamentally rethink how we measure the economy. Current national accounting frameworks are human-centric, focusing solely on consumption and investment from a human perspective. However, an increasing share of economic activity involves artificial intelligence. This project aims to develop new economic measurement frameworks that capture growth from both human and AI perspectives.
-
The ideal candidate would have:
Interest in AI economics and measurement challenges
Strong data analysis and econometric skills
Familiarity with national accounts and price indices
Background knowledge in macroeconomics and economic measurement
Programming skills for handling large datasets
The project builds on Korinek's recent paper "The Rise of Artificially Intelligent Agents" and aims to provide empirical measurements to complement its theoretical framework.
-
The goal of the project is to develop a set of indicators measuring the AI economy, contrasting them with traditional measures of economic growth in a blog post.
Julian Jacobs
Researcher,
Google DeepMind
AI Economic Impacts
Conducting research into projecting the impacts of AI on work and workers using a variety of empirical (observational and experimental) methods. A primary area of research focus is work, worker retraining programs, and the social-psychological value of employment.
-
Interest in AI economic impacts.
Strong data science background.
Advanced skills in R.
-
This FIG Fellowship may be funded.
Deric Cheng
AI Policy Researcher,
Convergence Analysis
Designing Economic Policies For A Post-AGI Economy
We are looking to work with Fellows to produce novel research answering the question: What economic policies and interventions should governments adopt during the upcoming AI economic transition?
This sort of deep research is extremely neglected and virtually non-existent. By the end of this research program, we intend to collectively develop the first-ever collection of deep research articulating a vision for how governments should respond to the widespread impact of AI systems on the economy. We expect that this compendium will immediately become the seminal work in this highly neglected domain, and kickstart the international conversation around plausible solutions to a post-AGI economy.
-
We are looking for Master's students, PhD students, and mid-career professionals interested in economic policy research.
An economics (and particularly econometrics) background is a big plus.
We'd like people who are self-motivated and can conduct research largely independently but with lightweight support / guidance from mentors.
-
We'd like to have you independently publish a paper researching and evaluating a concrete economic policy for a post-AGI economy.
The timeline would be roughly 12 weeks.
National & International Policy
Proposing how the US executive branch should regulate AI, determining the effect of regulation on AI releases, and helping countries coordinate on managing AI risks.
Eli Lifland
Researcher,
AI Futures Project
An Executive Branch Playbook For AGI
We are working on a playbook for how the US executive branch should react to AGI. You'd contribute to one or more subquestions involved. For example, you might research the most likely international coordination mechanisms and how the US should prepare for potential international agreements. Or you might research what oversight mechanisms the executive branch should use to steer AI companies (e.g. via government contracts).
-
Professional with strong experience in or around the US executive branch. Prepared to work as part of a team on the broader project. Substantial technical understanding of frontier AI training and alignment is ideal but not required.
-
We're not sure, but we will probably aim for a paper by mid-year.
Saad Siddiqui
AI Policy Researcher,
Safe AI Forum
Isabella Duan
AI Policy Researcher,
Safe AI Forum
International If-Then Commitments
Two things are true about advanced AI: (1) the capabilities of AI systems are improving rapidly, and (2) they could change the world as we know it. At the same time, there is limited consensus on how exactly these rapidly improving capabilities could lead to harm. The scenarios policymakers and scientists are concerned about range from AI agents speeding up AI R&D and leading to a risky intelligence explosion, to widespread proliferation of open-source AI systems that allow rogue actors to build weapons of mass destruction.
Companies have approached this problem by introducing Frontier Safety Frameworks, which are a series of conditional commitments tied to specific risk scenarios. The goal of this research project would be to explore state-level international conditional commitments. This will involve evaluating whether conditional commitments can be effective at the international level, spelling out the key design criteria for such commitments, potential risk scenarios, and sample if-then commitments tied to the different risk scenarios.
-
Candidates should be advanced undergraduate or graduate students able to work autonomously with minimal supervision. They should know how to use citation managers like Zotero and be willing to work in a team with 1-2 other researchers. They should also have knowledge of, or be willing to quickly get up to speed on, frontier safety policies and the existing literature on international AI agreements.
-
The goal would be to co-author an exploratory blog post in the first month and a full paper by the end of the 12 weeks, though some review and final publishing will likely follow after the 3 months end.
Markus Anderljung
Director of Policy and Research,
GovAI
Independent AI Governance Research
Here is a list of projects that Markus would be excited to see more work on (excluding “What is The Business Strategy Behind Releasing Model Weights?”). This is an exciting opportunity to shape a project as a FIG Fellow from start to finish, where you'll be able to develop specific career capital while making progress on open questions in AI governance.
-
Ideal candidates will have:
Ability to define key questions, choose effective methodologies, and generate meaningful insights.
Familiarity with the relevant subjects, e.g. regulation, governance mechanisms, or institutional decision-making in AI.
A drive to lead their own research project, structuring their work and pushing it forward with minimal oversight.
Strong writing skills to produce impactful research outputs.
Most of your interaction with Markus will be through asynchronous feedback (roughly fortnightly), plus occasional meetings to give high-level direction, add context, or make introductions where helpful.
-
This is up to the FIG Fellow, but the more detailed their description of the output and how they'll achieve it, the better.
Investigating AI Model Release Delays In The UK And EU
This project will examine whether AI models and products are being released later in the UK and EU than in the US, or not at all, and why this might be happening. Over the past year, there have been reports of AI companies delaying the deployment of AI systems in the EU and UK, or not deploying them at all; the FIG Fellow will clarify timelines, identify causes, and suggest concrete actions that regulators can take to resolve the delays. You can find more information in this document.
-
The ideal candidate will have:
The attention to detail required to track AI model release timelines across regions, verify claims, and separate speculation from fact.
Familiarity with EU, UK, and/or US tech regulation, particularly around AI governance, compliance, and market access.
Capacity to assess competing explanations for AI deployment delays, from legal and bureaucratic hurdles to strategic lobbying.
Strong writing skills to present findings concisely and recommend actionable policy insights.
Comfort with leading their own research, structuring the project, and driving it forward with minimal oversight.
This is a great opportunity for someone eager to uncover real regulatory dynamics, challenge assumptions, and contribute to AI policy debates with concrete, well-researched insights.
-
This is up to the FIG Fellow, but the more detailed their description of the output and how they'll achieve it, the better.
Rob Trager
Senior Professor,
Oxford Martin School
Nick Caputo
Legal Researcher,
Oxford Martin School
Convenings On The Governance Of Open Source AI
You will be working with Nick Caputo from the Oxford Martin AI Governance Institute to organize a convening, first online and then in person, with the Berkman Klein Center at Harvard on open-source models.
This will require:
- Conducting supporting research and literature reviews on the current state of open-source governance, especially as it relates to AI.
- Writing clear and engaging pre-meeting memos that can facilitate conversations between participants.
- Summarizing the discussions and outcomes of the convenings and helping to coordinate follow-up actions and projects.
- Helping coordinate logistics as required.
-
Masters student or early to mid-career professional with policy or research experience.
We would also be interested in hearing from people with a (semi-) technical background and an understanding of the open-source AI landscape.
Experience conducting policy research/analysis and writing policy briefs preferably related to technical governance.
Available to join occasional meetings; any experience organizing or exposure to multi-stakeholder dialogues/events is a strong plus.
-
We expect this project will commence at the end of February and continue for at least 12 weeks, with 1-2 convenings during this period. If the convenings are a success, they are likely to become a regular occurrence.
A Research Agenda For Gold Standard AI Risk Management
The goal of this project is to develop a research agenda towards a gold standard AI risk management framework.
The agenda will survey the existing risk management landscape, identify open problems, and provide example projects for different stakeholders to work on.
The Fellow(s) would support this project by:
- Conducting background research and literature reviews on existing work related to risk management.
- Surveying experts to identify open questions and example projects.
- Assisting in breaking down the fairly complex risk management space into clear visual graphics.
-
Master's student (or above), or an early to mid-career professional with first-hand experience in technology standards or risk management.
Ability to break down complex and broad problems into their components.
Prepared to conduct deep research into existing literature and related standards.
-
We have a high level outline of the paper and an early stage example section. The Fellow(s) will thus be able to dive in immediately and make contributions to building out further sections. We expect to circulate the first sections for early external feedback by March and publish in April.
Implications Of Advanced AI For International Security
During this project, we will organise two events, one online and one in person, convening scholars of security studies to discuss the implications of advanced AI for international security.
The convenings will aim not only to educate scholars about risks from advanced AI, but also to work collaboratively to identify areas for further research and collaboration.
Fellow(s) will support this project by:
- Conducting background research on the implications of advanced AI for international security, especially to map out potential scenarios.
- Summarising the outcomes of the convening and coordinating and supporting potential follow-up actions and projects.
- Supporting the logistics of the convening as required.
-
Advanced undergraduate degree or better.
Academic or professional experience in international/national security, international relations or a similar field of study.
Experience with scenario planning is a plus.
Experience designing workshops/events is a plus.
Writing & Journalism
Concise and cutting analyses to guide the decision-making of DeepMind, Longview Philanthropy, and the AI safety community.
Séb Krier
Policy Development & Strategy,
Google DeepMind
Writing In-depth AI Policy Memos For Google DeepMind
You’ll research and write in-depth memos (2-4 pages) on AI policy topics, analysing and summarising existing research to answer specific questions from the project lead. Memos will typically be due 2-7 days after being requested (sometimes longer) and should require minimal editing.
Example topics include:
- Overview of AI procurement regulations in a specific jurisdiction
- Benchmarks used by major evaluation firms for AI models and trends in recent months
- Analysis of recent developments on a particular theme, technical area, or policy topic
- Comparisons of different Responsible Scaling Policies and associated documentation
- Market reactions to new AI model releases
-
Strong research and analytical skills, capable of quickly synthesising information from multiple sources (including technical) into clear, well-structured memos.
Excellent writing ability and attention to detail, with the ability to distill complex AI policy topics and answer questions concisely.
Comfortable working independently under deadlines (typically 2-7 days per memo) and producing high-quality work with minimal revisions.
Familiarity with AI policy, regulation, or market trends is a strong plus, but strong general research skills and the ability to learn quickly are essential.
Ideally, you can commit 5-10 hours spread across the weekdays, and you can respond flexibly to urgent memo requests during GMT business hours for turnaround in 2-7 days.
We expect to select multiple FIG Fellows for this project, spread across areas of expertise and possibly multiple time zones.
-
Expect to write about two memos per week.
This will be a paid opportunity, with remuneration from Google DeepMind; you’ll need to have full working rights where you’re based. You’ll also still enjoy all the benefits of joining the FIG Fellowship!
Suryansh Mehta
Co-Founder / Research & Communications,
FIG / Longview Philanthropy
Writing Donor-Facing Memos On AI Policy & Grantmaking
This project seeks skilled writers to create compelling, concise documents for high-net-worth donors, translating grantmakers' research and key global developments into engaging formats.
Key Responsibilities:
- Grantmaking Memos – Convert internal grantmakers' analyses into 1-2 page summaries tailored for donors, ensuring clarity and persuasiveness.
- Policy & Tech Updates – Summarize recent developments (e.g., legislation, AI governance shifts) in clear, digestible language for donor engagement.
- Grantee Reports – Contribute to the writing, copyediting, and/or proofreading of regular reports on our grantees' activities.
Expected Output:
- Short, polished, confidential donor-facing memos on grantmaking opportunities and AI policy updates.
- Precise, error-free content suited to a high-stakes audience.
-
Strong writing skills—able to convey complex ideas concisely and persuasively.
Ability to tailor messaging to different audiences, especially high-profile individuals.
Detail-oriented, ensuring zero errors in high-profile communications.
Familiarity with AI policy, governance, and related fields.
Professional, reliable, and able to work to deadlines independently without close supervision.
-
Commitment: 5-10 hours per week (higher availability preferred).
Duration: Minimum 12 weeks, with the potential to continue indefinitely. Potential for full-time roles if there’s a strong fit.
Work Model: Paid hourly for projects as work arises.
Confidentiality & Recognition: Work will not be public, but participants can list it on their CVs. Reference letters available.
Compensation: £30-50 per hour (all time worked will be paid).
Dan Hendrycks
Executive Director,
Center for AI Safety
Long-form Analysis And Journalism On Various AI Topics
We're setting up a new articles platform and looking for someone interested in developing articles and essays on various topics in AI safety and its relationship to society. Articles will be 1000-2000 words long. This requires an interest and skill set similar to journalism, though we're looking for someone interested in and able to go much deeper into the details than a typical journalist. As a participant, you'll have the opportunity to leverage our network of experts and do deep dives into topics in AI safety.
-
Ability to write in a magazine style (e.g., narrative writing similar to Lawfare, ability to make effective arguments).
Ability to write quickly and hit deadlines.
Excitement to learn about many aspects of AI safety.
(Optional) Knowledge of technical AI or of DC/Policy.
-
Looking for people who can commit to writing one 1000-2000 word article per month, and ideally one every two weeks.
We expect this to take 20+ hours per article.
Miscellaneous
Exploring wider questions, including metascience research for AI safety R&D and the governance of agentic AI through case studies from finance.
Mauricio B
Technology & Security Policy Fellow,
DC Think Tank / Ex-OpenAI Contractor
Metascience For AI Safety And Governance
How can we effectively make progress on the many R&D challenges involved in AI safety and governance? These challenges include R&D for technical AI safety, security, evals, and verification. There has been significant academic study of R&D progress itself (i.e. metascience), but these insights don't yet seem to have been applied to AI safety and governance. In this project, a team will review metascience research to develop recommendations for how policymakers and private funders can effectively advance R&D on AI safety and governance. For example, how do different funding structures, such as ARPAs, NSF grants, and advance market commitments, compare? We'll aim to produce a blog post.
-
No hard requirements. Bonus points for research experience, AI safety and governance knowledge, writing and analytical reasoning skills, and metascience experience.
-
12 weeks (with potential for extension given mutual agreement). Blog post. Maybe also presenting/discussing findings with researchers.
Lewis Hammond
Co-Director,
Cooperative AI Foundation
Lessons From Algorithmic Trading For Agent Governance
Financial markets are more or less the only place where we currently see complex autonomous agents (in the form of trading algorithms) interacting with – and adapting to – each other in high-stakes scenarios. They are also highly regulated, in an attempt to avoid outcomes ranging from collusion to flash crashes. This project will take a deep dive into the technical and regulatory mechanisms used to monitor and stabilise algorithmic trading in financial markets, and distill key lessons for the governance of advanced AI agents.
The aim will be to produce a short report that would form the basis for one or more of: a paper (to be submitted to a relevant academic venue); a blog post (for a wider audience); a briefing for policymakers and/or AI governance researchers. I will be able to connect fellows to experts in agent governance and the use of trading algorithms in financial markets, and also provide general research guidance (especially when it comes to AI safety/governance), but I am not an expert on finance, law, or economics, so fellows must have a relevant background in this regard.
-
Ideally postgraduate or above (talented late-stage undergraduates are also welcome to apply).
Background in at least one of: economics, finance, law.
Relatively autonomous and good at time management.
Suryansh, a FIG co-founder, presenting his research at the Spring 2024 Research Residency.