Oxford Group on AI Policy conducts robust, multidisciplinary research to inform governments’ responses to AI.
FIG helps you build career capital. You could spend 5-10 hours a week working on concrete policy proposals that help governments respond effectively to the political, social and economic challenges of AI.
Our project leads are looking for a range of research associates: from social science and STEM students to mid-career policymakers and enthusiastic, well-read generalists with a track record of high-quality written work.
Scroll down to learn more. On this page, we list our focus areas, project leads, and open projects. (Applications for the latest round have now closed. For more information, please contact us at info@futureimpact.group.)
AI Policy: Focus Areas
In the next few months, we will work on:
Politics and IR. Coordinating international governance.
Social Science. Investigating AI’s socioeconomic effects.
Engineering and CS. Governance proposals using technical skills.
Each focus area has several project leads.
-
Saad Siddiqui & Isabella Duan (Safe AI Forum) have four projects on multilateral governance, with a focus on supporting the International Dialogues on AI Safety.
Robert Trager (Oxford Martin School) is working on three projects about information sharing, reporting regimes, and AI governance in China.
Read more below.
-
Julian Jacobs (DeepMind) is working on AI's Economic & Social Impacts.
Sam Manning (GovAI) is working on how to improve the labour market's resilience to changes driven by AI.
Oliver Ritchie (GovAI) is investigating automation and investment, and doing policy engagement in the UK.
Read more below.
-
METR is offering various projects related to evaluations of frontier AI systems.
Robert Trager (Oxford Martin School) is working on hardware verification mechanisms.
Mauricio B (DC Think Tank, ex-OpenAI) is investigating verification of international agreements and the stability of international governance.
Read more below.
International AI Governance
International AI Governance focuses on building global frameworks and cooperation to ensure responsible AI development, prevent misuse, and address transnational risks from advanced AI.
Saad Siddiqui
AI Policy Researcher
Safe AI Forum
Isabella Duan
AI Policy Researcher
Safe AI Forum
Project Descriptions
"Stable Plurality" - a proposal for a stable multilateral order with several leading AI powers
Most existing proposals for global AGI governance fall into three categories: one AGI project under American control, an AGI project led by the West, or a single global AGI project. This project would explore what a stable global governance system with more than one AGI project might require. It may also compare such a multipolar arrangement against single-project proposals along the axes of desirability, feasibility, and robustness to different AGI development pathways. The output would most probably be a series of blog posts, a paper, or a report.
Case study of international standard-setting bodies with both US and Chinese involvement.
This project examines understudied international standard-setting bodies with US and Chinese involvement that have influenced global technical standards (e.g., SPEC, the Open Compute Project). The goal is to develop case studies and write a blog post or paper summarising lessons that could be applied to AI standard-setting.
Turning consensus statements from the International Dialogues on AI Safety into concrete policy proposals.
At previous International Dialogues on AI Safety (IDAIS), prominent Chinese and Western scientists have agreed on substantive policy goals to mitigate extreme risks from AI. For a specific jurisdiction or policy audience (e.g., the US), how do we turn those goals into actionable policy? This project involves picking a specific jurisdiction and writing policy memos or proposals that operationalise the IDAIS goals there.
Designing a workshop for international scientists to deliberate on threat modelling for key AI risks.
This project involves hosting a pilot in-person workshop to deliberate on threat models, early-warning thresholds, and evaluation tasks that signal when a red line has been crossed, as outlined in the IDAIS-Beijing statement. Responsibilities include scoping the workshop content, producing preparatory materials, and contributing to the editing of a post-workshop white paper.
Who we’re looking for
For each project, Safe AI Forum is looking for applicants with:
Stable plurality: Coursework or research experience in international relations, and experience conducting open-ended, less well-scoped research.
Case study: At least one thesis or article's worth of relevant research experience. Writing proficiency for academic and policy-oriented audiences is valuable.
Consensus statements: Prior relevant research experience and experience drafting policy memos and proposals. Knowledge of the legal and regulatory landscape in an important jurisdiction, and the ability to translate high-level policy goals into actionable steps.
Workshop design: A degree in STEM with a good understanding of how deep learning works, and experience as an ML engineer or researcher.
Robert Trager
Co-Director
Oxford Martin School
Project Descriptions
Information sharing and international civilian AI governance
Frontier AI firms must balance the security of their model weights with data privacy and copyright laws, national security interests, and compliance with local and global regulations. Which actors should share which information, and with whom? This project will investigate information-sharing at the international level, and explore advocacy options for robust information-sharing frameworks.
Toward an international reporting regime for advanced AI
Domestic reporting regimes for frontier AI cannot detect model training runs distributed across compute providers in different jurisdictions, and some actors might exploit this gap to evade regulatory scrutiny. This project will investigate how leading AI jurisdictions (e.g. the G7 countries, the EU, China) can develop a reporting regime for advanced AI that incentivises cross-border compliance.
AI Governance in China
AI governance in the People’s Republic of China will matter enormously for the future of frontier AI. AI safety discourse among Chinese policymakers is evolving rapidly, and the government is moving quickly to design and enact auditing and technical governance regimes. This project will investigate the frontier AI governance landscape in China.
Who we’re looking for
For each project, Robert is looking for applicants with:
Information sharing: Strong analytical and research skills, i.e. the ability to analyse and synthesise complex policy, legal, and technical issues in AI governance, including international regulations, compliance, and information-sharing frameworks.
Reporting regime: Expertise in international relations or global regulatory frameworks. Knowledge of how AI is governed across multiple jurisdictions, with a focus on balancing local and global regulations, national security concerns, and corporate interests.
China: Familiarity with the Chinese AI governance and policy landscape, including an understanding of China’s rapidly evolving AI policies, regulatory priorities, and the broader geopolitical implications of China’s AI development.
Socioeconomic Impacts of AI
Socioeconomic Impacts of AI examines how artificial intelligence reshapes labour markets, economic inequality, and societal structures.
Julian Jacobs
Researcher
Google DeepMind
Project Description
AI Economic Impacts and Social Consequences
This research agenda investigates the impacts of AI on both the economy and society. Julian and his researchers will examine the potential for AI to drive economic growth, disrupt labour markets, increase inequalities, transform industries, and produce shifts in socio-political attitudes. They use a combination of quantitative and qualitative methods to produce academic papers for publication, think tank reports, and short-form articles.
Who we’re looking for
Qualitative-focused applicants should demonstrate exceptional writing and analytical skills through coursework or writing samples.
Quantitative-focused applicants should have experience in coding, data analysis, and visualisation.
Preferably, prior research experience in relevant fields (e.g., economics, sociology, technology) and familiarity with quantitative methods.
Strong communication and collaboration skills, and the ability to work independently.
An interest in long-term collaboration (e.g. 12 months), conditional on mutual fit.
Sam Manning
Senior Research Fellow
GovAI
Project Description
What can AGI companies do to increase societal resilience to the labour market impacts from AI?
This would be a memo (and, potentially, a paper) on what AGI companies should do to increase preparedness for the labour automation impacts from increasingly advanced AI systems. It would aim to answer questions such as:
How do different capability levels in RSPs (responsible scaling policies) and preparedness plans map onto labour market impacts?
What kind of model evaluations are necessary to understand whether a new system will have large labour market impacts?
If these effects are detected, what should that trigger in terms of responsible deployment practices (public information sharing, gradual roll-out, explicit calls for economic policy action)?
When might mitigation (pacing deployment) be preferred to adaptation (policy to strengthen the adaptive capacity of the workforce)? What should companies, and what should governments, be responsible for in managing labour market impacts?
Who we’re looking for
Sam is looking for applicants who are at least advanced undergraduate students. An economics background and an understanding of US economic policy are a plus. Candidates should show a high level of attention to detail.
Oliver Ritchie
Research Scholar
GovAI
Project Description
Investigation of UK labour market and human impacts from AI automation: who is most likely to benefit, and who is most at risk?
“AI-driven automation” can feel like an abstract concept, and policymakers could engage more concretely with the ways it could reshape society. This project would dive deeper into Oliver’s earlier work, focusing on who is most likely to be affected if AI becomes increasingly able to replace or augment human workers. The aim is to explain concisely, with some punchy examples, what that information means in practice, rather than to present every detail. The output would most likely be shared as a blog post, or perhaps an op-ed. Oliver expects the main sources of evidence to be academic papers, reports from consultancy firms, government publications, think tank reports, and the like.
How likely is investment in frontier AI to slow down?
Many people expect the trend toward ever-larger AI training runs to continue, and base their expectations of what will happen next on that assumption. This project would identify the strongest arguments that investment in frontier systems might slow down, and evaluate the evidence for them (e.g. perhaps it is too hard to protect property rights in the largest models, so investors cannot make much profit). Oliver expects this would become an accessible explainer to share with policy generalists.
Contributions to reactive UK-focused policy notes
This would most likely involve extracting evidence or lessons from cutting-edge research and explaining how it informs a decision that policymakers currently face. It might be published (blog or op-ed style), or shared directly with policymakers in private.
Who we’re looking for
These projects could work well for early-career professionals, or others considering a non-academic route. They could also suit academics interested in exploring a different style of working (e.g. a greater focus on tying together broad evidence bases into practical suggestions, rather than advancing a narrow frontier of knowledge).
A personal interest in policy/politics is essential. Some understanding of economic theory is desirable, but this doesn't have to be from formal education.
Participants need to be very strong in at least one of these skills, and comfortable doing both:
clear, concise writing on nuanced topics
proactively finding evidence on a given question from both academic and non-academic sources (e.g. government publications), and taking a critical view of whether the evidence is convincing
Technical AI Governance
Technical AI Governance aims to improve the likelihood of safe and accountable AI through robust oversight, rigorous technical standards, and clear regulatory frameworks.
Model Evaluations and Threat Research (METR)
Various Project Leads
Project Description
New AI R&D Evaluations: ML Engineers Needed
METR is developing evaluations for AI R&D capabilities. Our goal is to provide early warning before AI agents become able to dramatically improve themselves and kick off an ‘explosion’ of dangerous capabilities.
Why focus on risks posed by AI R&D capabilities? It’s hard to bound the risk from systems that can substantially improve themselves. For instance, AI systems that can automate AI engineering and research might start an explosion in AI capabilities – where new dangerous capabilities emerge far more quickly than humanity could respond with protective measures. We think it’s critical to have robust tests that predict if or when this might occur.
Applicants accepted to this project will help drive the development of these AI R&D evaluations forward. The exact project will be determined by METR and successful candidates.
Who we’re looking for
An ideal candidate would be a machine learning researcher with substantial experience working with frontier LLMs and a track record of successful execution-heavy research projects. Specifically, we're looking for people who:
Have a strong ML publication record (e.g. a few first-author papers accepted to top journals, workshops, or conferences), or
Have multiple years of experience solving challenging ML engineering or research problems.
Robert Trager
Co-Director
Oxford Martin School
Project Description
AI hardware verification mechanisms
Verifying how frontier AI chips are being used, and by whom, is a critical component of civilian AI governance. What kinds of agreements can governments sign on to, and what should their technical requirements be? What can compute providers learn from the signals they receive from their customers’ compute clusters, and how can this support verification? This project will investigate which mechanisms are feasible, and attractive to both regulators and developers.
Who we’re looking for
Robert is looking for applicants with:
Experience with regulatory frameworks and international agreements: Knowledge of global regulatory environments and the ability to evaluate the feasibility and attractiveness of hardware verification agreements for both governments and AI developers.
Technical expertise in AI hardware and compute infrastructure: Strong understanding of frontier AI chips, compute clusters, and the signals they generate, with the ability to assess their use in verification mechanisms.
Mauricio B.
DC Think Tank
ex-OpenAI
Contractor
Project Description
Verifying international agreements on AI
Verification of compliance could be crucial for international agreements on AI. Projects could include deep dives into specific technical questions (e.g. in forecasting, ML, or hardware), or qualitative exploration of near-term options for partial implementation. More concretely, some example projects are: literature reviews on relevant topics in computer security, data compilation and Fermi estimates on consumer hardware stocks, a literature review on effective R&D funding, and a study of how data sovereignty and privacy laws interact with verification proposals. The project would involve a small group of participants and aim toward publication as an arXiv preprint or blog post.
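To give a concrete flavour of the Fermi-estimate strand mentioned above, a back-of-the-envelope calculation might look like the minimal Python sketch below. Every figure and variable name in it is an illustrative placeholder chosen for this example, not data or methodology from the project.

```python
# Illustrative Fermi estimate of aggregate consumer GPU compute.
# All numbers below are placeholder assumptions, not real data.

consumer_gpus_in_use = 200e6   # assumed installed base of discrete consumer GPUs
avg_flops_per_gpu = 30e12      # assumed average throughput per GPU (FLOP/s)
fraction_divertible = 0.01     # assumed share that could plausibly be pooled for training

aggregate_flops = consumer_gpus_in_use * avg_flops_per_gpu * fraction_divertible
print(f"Divertible consumer compute: {aggregate_flops:.2e} FLOP/s")
```

The point of such an estimate is to check orders of magnitude: whether distributed consumer hardware could matter for verification regimes at all, before investing in detailed data compilation.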
Stable and beneficial international governance of AI
Geopolitical competition over AI may destabilise attempts at international coordination on AI, and it might independently escalate into conflict. Projects on this topic could explore how international agreements on AI safety and benefit-sharing could be designed to be incentive-compatible, stable, and/or beneficial amidst geopolitical competition. Potential projects could involve game-theoretic modelling, historical case studies, ethical philosophy, or analysis of technical mechanisms for coordination. The project would, if possible, run in a small team of mentees and aim toward publication as an arXiv preprint or blog post.
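As an illustration of where the game-theoretic modelling strand could start (a minimal sketch with made-up payoffs, not a model specified by Mauricio), the snippet below encodes a toy two-power compliance game and checks which strategy pairs are pure-strategy Nash equilibria.

```python
# Toy two-player compliance game (hypothetical payoffs chosen for illustration).
# Each power chooses to "comply" with an AI agreement or "defect".
# payoffs[(a, b)] = (row player's payoff, column player's payoff)
payoffs = {
    ("comply", "comply"): (3, 3),
    ("comply", "defect"): (0, 4),
    ("defect", "comply"): (4, 0),
    ("defect", "defect"): (1, 1),
}
strategies = ["comply", "defect"]

def is_nash(a, b):
    """Return True if neither player can gain by unilaterally deviating."""
    row_ok = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in strategies)
    col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in strategies)
    return row_ok and col_ok

equilibria = [(a, b) for a in strategies for b in strategies if is_nash(a, b)]
print("Pure-strategy equilibria:", equilibria)  # with these payoffs: only (defect, defect)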
Who we’re looking for
For both projects, Mauricio is looking for someone who can commit to 10+ hours per week.
Participants should have strong analytical reasoning and writing skills, and familiarity with a relevant field, topic, or research method (many fields and topics are relevant, especially in computer science and international politics).
An applicant with at least some prior research experience and deep knowledge of a relevant area will likely be a strong candidate.
Suryansh, a FIG co-founder, presenting his research at the Spring 2024 Research Residency.