AI Policy projects conduct robust, multidisciplinary research to inform governments’ responses to AI.

FIG helps you build career capital. You can spend 8+ hours a week working on research that informs governments’ responses to AI and helps mitigate catastrophic risks.

Our project leads are looking for postgraduate students across multiple fields (including computer science and philosophy), people with experience in machine learning, decision and game theory specialists, and well-read generalists with a track record of high-quality written work.

Scroll down to learn more. On this page, we list our focus areas, project leads, and open projects.

Applications for the Winter 2025 FIG Fellowship are now open!

Apply here by midnight (Anywhere on Earth) on Sunday 19th October!

Focus Areas

In the next few months, we will work on:

Policy & Governance: projects on shaping rules, standards and institutions around AI on a national and international scale.

Economy, Ethics & Society: projects on managing the effects of AI on economies, societies and power structures.

Project Leads

    • Constellation

      • Astra Fellowship

    • Robert Trager

      • Mitigating AI-Driven Power Concentration

    • Lewis Hammond

      • Lessons From Internet Protocols For Agent Governance

    • Liam Patell

      • Models of Great Power AI Competition

      • Researching AI implications for conflict risks and international relations

    • Miro Pluckebaum

      • Developing an AI Safety Agenda for Singapore

    • Nick Caputo

      • Researching AI and Institutions

      • Legal Alignment for Safe AI

    • Marta Ziosi

      • Open Problems in Frontier AI Risk Management

    • Jonathan Birch

      • Mapping AI impacts on wild, domestic and farmed animals

      • Toward a Code of Practice for AI and non-human animals

    • Risto Uuk

      • A Taxonomy of Systemic Risks from General-Purpose AI

      • Effective Mitigations for Systemic Risks from General-Purpose AI

    • Richard Mallah

      • Quantitative Modeling for AI Risk Pathways

      • Credible Evaluation Standards for Adversarial AI

    • Deric Cheng

      • Designing Policy Interventions For A Post-AGI Economy

    • Andrew Sutton

      • Open Questions paper on AI x Finance

Policy and Governance

Projects on shaping rules, standards and institutions around AI on a national and international scale.

Constellation

Constellation: Astra Fellowship

Alongside a great selection of FIG projects, you can also apply to be considered by a variety of project leads from Constellation, as part of their upcoming Astra Fellowship, starting January 2026. Astra is a fully-funded, 3-6 month, in-person program at Constellation’s Berkeley research center in the Bay Area. Fellows advance frontier AI safety projects with guidance from expert mentors and dedicated research management and career support from Constellation’s team.

  • FWe’re looking for talented people that are excited to pursue new ideas and projects that advance safe AI. You may be a strong fit if you:

    • Are motivated to reduce catastrophic risks from advanced AI

    • Bring technical or domain-specific experience relevant to the focus areas (e.g., technical research, security, governance, policy, strategy, field-building)

    • Would like to transition into a full-time AI safety role or start your own AI safety focused organization

    Prior AI safety experience is not required. Many of our most impactful fellows entered from adjacent fields and quickly made significant contributions. If you're interested but not sure you meet every qualification, we’d still encourage you to apply.

Mitigating AI-Driven Power Concentration

The aim of this project is to examine AI-driven power concentration within a political context. By combining the technical dimensions of AI safety research with elements of political theory and political science, this project will produce a comprehensive study of AI’s role in “autocratization”, democratic backsliding, and democratisation.

Research Questions

  1. How are political regimes and economic elites currently using AI to centralize power?

  2. Which AI systems pose the highest risk and which regime types are most vulnerable?

  3. How does AI-driven power concentration erode democratic norms and institutions?

  4. What policies can mitigate these risks across contexts?

Outputs

  1. A structured assessment framework for analyzing AI in authoritarian settings.

  2. A catalog of high-risk use cases impacting democracies.

  3. A comparative policy analysis of interventions and best practices.

Fellow Contributions

  1. Conduct literature reviews, draft sections of papers and develop case studies.

  2. Support comparative policy analysis by engaging in research at the intersection of AI safety, democracy, and governance.

  3. Generate evidence-based interventions with real-world policy impact.

    • Academic Background: Advanced undergraduate or graduate student in machine learning, computer science, political science, international relations, public policy, law, or related fields, with interdisciplinary backgrounds particularly welcomed.

    • Core Knowledge: Familiarity with AI governance debates and strong understanding of political institutions and technology's interaction with governance; technical AI knowledge beneficial but not essential.

    • Research Capabilities: Excellent analytical and writing skills with proven ability to conduct literature reviews, synthesise findings, and produce structured, evidence-based research and policy insights.

    • Experience Required: Prior research experience (academic or policy-focused) essential, with experience in report writing, policy briefs, or academic papers highly valued; international organisation engagement desirable but not mandatory.

    • Working Style & Values: Self-directed researcher comfortable with minimal supervision whilst contributing to collaborative teams, genuinely concerned about democratic challenges and motivated to contribute meaningfully to this research field.

Lewis Hammond

Co-Director, Cooperative AI Foundation

Lessons From Internet Protocols For Agent Governance

In the early days of networked computing, people spent a lot of time thinking about (and even fighting 'wars' over) which protocols would be most robust. Now we are seeing the emergence of early protocols for interactions between advanced AI agents, such as MCP. This project would seek to understand what lessons can be learnt from these earlier developments, with a view to improving the robustness and security of networks of advanced AI agents. Depending on the skill set of the mentee(s), this could cover both the technical features of the protocols and the legal and political context surrounding their development and adoption.

    • Ideally postgraduate or above (talented late-stage undergraduates are also welcome to apply)

    • Some basic AI/CS background is required (enough to understand internet protocols and MCP) and an interest or background in history, politics, and/or law is a plus

    • Relatively autonomous and good at time management

  • Ideally this will end up as a short technical report, with a briefing and/or blog post distilled from that.

Models of Great Power AI Competition

This project will model bipolar US-China AI competition in order to understand the risk of conflict (inspired in part by section 6.3 of Allan Dafoe's 2018 GovAI research agenda).

Previous work has claimed that the AI race is a prestige race, or innovation race, rather than an arms race. This claim is plausibly true -- and yet, as a dual-use technology, AI also threatens to create a security dilemma. This project would investigate the idea that there are multiple stable regimes of bipolar great power competition: virtuous competition (innovation-based attraction) and traditional security competition (force-based coercion).

  1. Under what conditions do bipolar great power competitions converge to each of these two regimes?
  2. What factors determine transitions between them?

You'd largely have ownership over the direction of the project -- including synthesizing historical analysis and building new models of competition.

Liam Patell

Research Scholar, GovAI

  • Familiarity with US AI policy -- especially US national security policy levers -- is ideal, but please err on the side of applying if you're not sure. Background in the following areas may be helpful, but is not necessary:

    • IR, war studies;

    • conflict studies;

    • security studies.

    Excellent work may require fluency with the AGI strategy landscape and possible trajectories of AI progress.

  • Aiming for an arXiv preprint after 3-6 months. Shorter, intermediate outputs may be possible.

Researching AI implications for conflict risks and international relations

Advanced AI could reshape the dynamics of conflict and cooperation at multiple levels. On the one hand, non-state actors already deploy commercial drones, and access to more powerful AI could amplify their reach, precision, and lethality — raising risks of escalation into civil wars, proxy wars, or even great-power confrontation. On the other hand, frontier AI might open new avenues to strengthen peace: improving conflict prediction, empowering humanitarian operations, and reinforcing institutions that reduce violence.

This project will:

  • Map how different actors — from non-state groups to major powers — might integrate AI into conflict or peace-building efforts.
  • Catalogue and analyse incidents where AI or drones are used in conflict, building an early evidence base.
  • Model scenarios where AI parity between great powers alters the balance of power and incentives for cooperation or confrontation.
  • Evaluate strategic levers governments can pursue today to improve competitiveness while safeguarding peace.
  • Assess whether AI ultimately heightens risks of war or creates opportunities to accelerate the mechanisms of peace.

See here for more information.

  • Familiarity with US AI policy -- especially US national security policy levers -- is ideal, but please err on the side of applying if you're not sure. Background in the following areas may be helpful, but is not necessary:

    • IR, war studies;

    • conflict studies;

    • security studies.

    Excellent work may require fluency with the AGI strategy landscape and possible trajectories of AI progress.

  • Aiming for an arXiv preprint after 3-6 months. Shorter, intermediate outputs may be possible.

Developing an AI Safety Agenda for Singapore

Singapore is emerging as an important actor in the AI governance landscape: hosting international convenings between East and West, building out one of the world's leading AI Safety Institutes, and facilitating the creation of a commercial AI assurance ecosystem.

  • This project aims to identify which additional concrete AI Safety projects (across technical and governance research, ecosystem building, domestic policy, and diplomacy) the Singaporean ecosystem is particularly well placed to execute.
  • Doing so will require a combination of research and stakeholder interviews.
  • Findings will be communicated in a public report as well as memos for stakeholders in civil service, academia and industry.
  • If time remains (or as a follow-up project), we will aim to execute one of the identified projects.

Miro Pluckebaum

Strategy and Research Manager, Oxford Martin AI Governance Initiative

  • We are looking for an experienced project lead who can operate with significant autonomy. As such, you should have existing experience driving research, policy memos, or other relevant projects (e.g. research workshops or convenings). Past experience working on projects related to AI, or experience of the region, is preferable but not required. Successful applicants might be PhD students or mid-career professionals in policy or technology companies. We may be able to source research assistants to support you in the project.

    You will need excellent stakeholder management and communication skills, strong writing and research skills, and project management experience.

Nick Caputo

Researcher, Oxford Martin AI Governance Initiative

Researching AI and Institutions

Institutions are a key technology for social coordination and effective action in a complex world. Advanced AIs will transform how existing institutions can operate and also make new institutional forms possible. This project will explore how past institutions arose to deal with new technologies and consider how AI might require and engender new institutions across fields from governance, to business, to social interactions. We will aim to produce several research papers and memos.

  • Master’s, PhD, JD, or similar degree, or early to mid-career professional with research experience; background in law, economics, politics, history, business, or similar domains preferred; ideally prepared to lead a substantive, mostly independent research project.

  • Aim to produce at least one significant research paper for early next year. For one existing project I'm working on, I won't be able to provide coauthor credit, but for others I should be able to.

Legal Alignment for Safe AI

Getting alignment right is probably one of the keys to a good AI future. The law might provide useful lessons for how to do alignment, from offering a robust and battle-tested set of rules to which AIs could be aligned, to containing useful lessons on how to constrain the reasoning and decision-making of powerful actors. This project will seek to develop the field of legal alignment and resolve key problems within it, evaluating whether and how the law can be used to align AI. We will aim to produce research papers for the AI governance and legal communities, and possibly work on technical outputs like legal alignment evals.

  • Lawyer, law student, legal researcher, or law professor with research experience. Alternatively, a technical researcher with a background in evals, alignment, RL, or similar.

  • We'll aim to produce law review articles or arXiv preprints on some key legal alignment questions. These will be coauthored.

Open Problems in Frontier AI Risk Management

This project investigates a central question: What are the most pressing open problems in frontier AI risk management, and what approaches could effectively address them? Although leading AI developers have announced commitments to safety, there remains little clarity on what robust and operationalized practices should entail. Existing initiatives, such as the International Scientific Report on the Safety of Advanced AI, focus on consolidating consensus, but less effort has gone into systematically identifying gaps and unresolved problems. Without this mapping, the field risks bottlenecks on the path toward shared norms and standards for safe AI.

The aim of this project is to (1) map the landscape of risk management practices for frontier AI, (2) highlight gaps where practices are underdeveloped, and (3) propose candidate solutions that could guide pre-standards work and consensus. Examples include frameworks for incorporating societal values into risk criteria, methods for managing internal deployment risks, and approaches for assessing the effectiveness of mitigations.

As part of this effort, the FIG Fellow will focus on a well-scoped subset of the project. Depending on their interests, this may involve helping to sketch and categorize gaps in a particular area of risk management, or developing proposals around an identified gap.

Marta Ziosi

Postdoctoral Researcher, Oxford Martin AI Governance Initiative

  • We are looking for a master’s or PhD-level candidate with a solid understanding of risk management. Technical expertise would be a strong asset (though not absolutely required), and prior policy experience would also be advantageous. The ideal candidate is able to work independently with minimal supervision, demonstrates strong initiative, and is comfortable taking a proactive approach to advancing project goals.


  • The minimum length for the project will be three months, with the possibility of continuing the research relationship depending on how the project goes. The ideal output will be either a contribution to a bigger paper or a co-authored paper, depending on the interests of the FIG Fellow.

Economy, Ethics & Society

Projects on managing the effects of AI on economies, societies and power structures.

Mapping AI impacts on wild, domestic and farmed animals

AI systems are rapidly shaping the lives of wild and companion animals (e.g. wildlife management drones, AI-driven pest control, pet surveillance). Many such impacts are underexplored and risk entrenching harms or missing opportunities for welfare improvements. This project would undertake a systematic review and field-mapping exercise of how AI technologies interact with animals across different domains, identifying both risks and opportunities for regulation and advocacy.

Example Work and Outputs

  • Building a taxonomy of animal-AI interactions, from driverless vehicles, to sensors in factory farms, to wildlife population management tools.
  • Reviewing technological startups (500+ already identified) and classifying their potential welfare effects.
  • Analysing case studies (e.g. automated farming, AI-assisted veterinary diagnostics) for both positive and negative welfare implications.
  • Exploring long-term trajectory impacts: could widespread AI-driven animal farming make factory farming harder or easier to dismantle?

Jonathan Birch

Professor, London School of Economics

  • The successful Fellow(s) could have a variety of backgrounds including philosophy, biology, environmental policy, governance and / or others. Fellow(s) will be expected to work flexibly under Jonathan's direction, in support of what his wider team is doing on this project.

Toward a Code of Practice for AI and non-human animals

AI is increasingly used in contexts that affect animals. Yet animals are almost completely absent from AI governance debates (e.g. the EU AI Act does not mention them once). This project would refine and expand on the principles outlined by Birch & Simoneau-Gilbert here, working towards a cross-sector code of practice for the ethical use of AI in relation to animals.

Example work and outputs:

  • Mapping the regulatory landscape (AI legislation at the national, state and other levels, national AI strategies, private codes of conduct) to highlight gaps in animal protections.
  • Drafting and testing candidate principles for ethical AI–animal interaction across contexts (wildlife, farmed, companion animals).
  • Preparing policy briefs and public-facing resources to build coalition support ahead of a 2027 global summit on AI & animals.
  • The successful Fellow(s) could have a variety of backgrounds including philosophy, biology, environmental policy, governance and / or others. Fellow(s) will be expected to work flexibly under Jonathan's direction, in support of what his wider team is doing on this project.

A Taxonomy of Systemic Risks from General-Purpose AI

This project aims to rework an existing pre-print and publish it in an academic journal. Here's the abstract of the preprint:

Through a systematic review of academic literature, we propose a taxonomy of systemic risks associated with artificial intelligence (AI), in particular general-purpose AI. Following the EU AI Act's definition, we consider systemic risks as large-scale threats that can affect entire societies or economies. Starting with an initial pool of 1,781 documents, we analyzed 86 selected papers to identify 13 categories of systemic risks and 50 contributing sources. Our findings reveal a complex landscape of potential threats, ranging from environmental harm and structural discrimination to governance failures and loss of control. Key sources of systemic risk emerge from knowledge gaps, challenges in recognizing harm, and the unpredictable trajectory of AI development. The taxonomy provides a snapshot of current academic literature on systemic risks. This paper contributes to AI safety research by providing a structured groundwork for understanding and addressing the potential large-scale negative societal impacts of general-purpose AI. The taxonomy can inform policymakers in risk prioritization and regulatory development.

Risto Uuk

Head of EU Policy and Research, Future of Life Institute

    • Strong background in conducting literature reviews or systematic literature reviews. Previous experience in taxonomy development is a plus.

    • Ability to carry out work independently with minimal instructions and oversight. 

    • Previous background and research experience related to AI risks is a plus.


  • With substantial contributions, the fellow can be a coauthor of the academic paper. The time commitment is negotiable, but in expectation 5-10 hours per week for at least 12 weeks. If the project gets finished earlier, there is other research to contribute to. Research tasks may include thematic analysis, taxonomy development, and revising an existing paper.

Effective Mitigations for Systemic Risks from General-Purpose AI

This project aims to rework an existing pre-print and publish it in an academic journal. Here's the abstract of the preprint:

The systemic risks posed by general-purpose AI models are a growing concern, yet the effectiveness of mitigations remains underexplored. Previous research has proposed frameworks for risk mitigation, but has left gaps in our understanding of the perceived effectiveness of measures for mitigating systemic risks. Our study addresses this gap by evaluating how experts perceive different mitigations that aim to reduce the systemic risks of general-purpose AI models. We surveyed 76 experts whose expertise spans AI safety; critical infrastructure; democratic processes; chemical, biological, radiological, and nuclear risks (CBRN); and discrimination and bias. Among 27 mitigations identified through a literature review, we find that a broad range of risk mitigation measures are perceived as effective in reducing various systemic risks and technically feasible by domain experts. In particular, three mitigation measures stand out: safety incident reports and security information sharing, third-party pre-deployment model audits, and pre-deployment risk assessments. These measures show both the highest expert agreement ratings (>60%) across all four risk areas and are most frequently selected in experts’ preferred combinations of measures (>40%).

    • Strong background in quantitative and qualitative data analysis.

    • Ability to carry out work independently with minimal instructions and oversight.

    • Previous research experience in AI risk and sociotechnical mitigations is a plus.


  • With substantial contributions, the fellow can be a coauthor of the academic paper. The time commitment is negotiable, but in expectation 5-10 hours per week for at least 12 weeks. If the project gets finished earlier, there is other research to contribute to. Research tasks may include quantitative and qualitative data analysis, and revising an existing paper.

Quantitative Modeling for AI Risk Pathways

This project develops quantitative models of AI risk pathways, adapting probabilistic risk assessment methods for advanced systems while modeling offense-defense dynamics across different scales of potential control failures. The work addresses scenarios where conventional actuarial approaches fail due to AI's unprecedented challenges, moving beyond the narrow, well-defined threat models typical in traditional risk management. The modeling also aims to identify critical intervention points, quantify previously qualitative assessments, and inform technical governance with actionable metrics.

Richard Mallah

Principal AI Safety Strategist, Future of Life Institute

  • The ideal fellow possesses strong foundations in Bayesian networks, multivariate statistics/calculus, tensor analysis, ontological modeling, and knowledge graphs. Experience with infra-Bayesianism would be advantageous. This rather technical project requires specialized mathematical skills to effectively represent complex interactions between vulnerabilities, deployment contexts, and governance factors that could lead to uncontainable harm with varying degrees of severity and reversibility.

  • This project is already in progress, and the Fellow(s) would be working with internal and external teams on the output (uncertain at this stage).

Credible Evaluation Standards for Adversarial AI

This project will develop the foundational principles for a new class of more scientifically rigorous advanced AI evaluation frameworks. It focuses not on implementing evaluations, but on establishing clear desiderata, guidelines, and methodological constraints that would make future evaluation protocols both more scientifically defensible and more effective at capturing emergent risks, even under skeptical scrutiny from parties approaching with adversarial mindsets. The fellow will critically analyze current evaluation approaches, identify specific weaknesses in their scientific underpinnings, and articulate principles for improvement, such as appropriate hypothesis formation, necessary control conditions, standards for reproducibility, and valid extrapolation boundaries. Their work will tackle the core tension between traditional scientific rigor and the need to anticipate capabilities and risks before they fully manifest, producing a conceptual framework that balances these competing demands while anticipating and addressing potential objections from methodological critics. This intellectual foundation will define how stakeholders might properly calibrate confidence in evaluation results, acknowledge uncertainty, and still generate actionable insights about novel capabilities and risks, including for those predisposed to dismiss AI risk concerns.

  • The ideal candidate will bring expertise in philosophy of science, experimental design methodology, and familiarity with AI evaluation landscapes, combined with strong analytical thinking and an ability to bridge technical and epistemological concerns while understanding the psychology of scientific credibility and resistance to AI risk findings. The Fellow(s) will lead this work independently, subject to guidance and feedback from Richard.

Designing Policy Interventions For A Post-AGI Economy

We are running a 12-week intensive research program for fellows. In this program, fellows will produce (in teams of 1-3) novel research to answer the question: What are economic policies and interventions that governments should adopt during the upcoming AI economic transition?

This sort of deep research is extremely neglected. By the end of this research program, we intend to collectively produce the first body of deep research articulating a vision for how governments should respond to the widespread impact of AI systems on the economy.

Topics of prioritization include: Public & Social Investments (infrastructural investments, social safety nets, redistribution), Global Governance (restructuring international organizations, tax coordination, dividend funds), and Wealth Capture Policies (taxation strategies, equity stake mechanisms, restructuring AI ownership).

Deric Cheng

Director of Research, Windfall Trust

  • We're looking to work with people with a strong background in economics (e.g. a PhD or Master's) or public fiscal policy! Early to mid-career professionals who are trained economists, or who have previous experience working in tax, welfare, or education policy, are highly encouraged to apply.

  • We plan to publish a new paper co-authored with the FIG participants, or briefings as necessary. The plan is for 12 weeks, ~120 hours of work.

Open Questions paper on AI x Finance

We are writing a multi-author Open Questions paper on frontier AI and the financial system, and seeking one or more research assistants (or co-authors/expert contributors). The paper will draw on diverse perspectives (e.g. from AI research, finance, regulation, economics), to address:

  • How might AI cause aspects of the financial system to fail, and how can this be mitigated?
  • How can mechanisms, lessons or tools from finance help promote AI safety or governance?

An abstract and chapter outline can be found here. Typical research assistant tasks (we can tailor this to you):

  • Deliver specific, directed research tasks (e.g. create a timeline of relevant regulatory publications and classify the themes addressed)
  • Prepare rough section drafts, or re-edit existing work (e.g. draft a skeleton outline of a section on AI risk insurance)
  • Planning and project management.

A research assistant should be able to commit 5 to 10 hours/week, ideally through to end-March 2026. We also welcome applications from experts to be a core co-author (a significant time commitment, 15hrs+/week) or expert contributor (much more flexible/tailorable).

Andrew Sutton

AI Researcher

  • An ideal research assistant will bring at least three of:

    • Strong research and research-writing skills (clarifying complex things)

    • Experience editing others’ work

    • Some familiarity with finance (from study or work)

    • Some familiarity with AI capabilities, safety or governance 

    • Strong organisational skills (experience of project management an advantage)

    Potential core co-authors or expert contributors should have either (and ideally both) of: i) 8+ years of experience in a relevant area; ii) experience writing for publication. For the core co-author role, being London-based is a plus. A core co-author will share responsibility for planning and writing the overall paper, while an expert contributor might contribute to one section or offer expert feedback.

  • The project will already be under way by the time FIG applicants join, and is projected to run until end-March 2026. Applicants who can stay on the project until the end are preferred.

    For a research assistant: 5-10 hrs/week commitment, with weekly check-ins.
    For an expert contributor: Will depend on your circumstances. This might involve outlining a chapter over the course of 1-2+ months, or just reviewing/commenting on drafts.
    For a core co-author: Several days per week. This person would become a close collaborator.

Suryansh, a FIG co-founder, presenting his research at the Spring 2024 Research Residency.