Our Fellowship
You can submit an expression of interest to our part-time, remote-first, 12-week fellowship here.
Our flagship program offers applicants the chance to work as research associates on specific projects, supervised by experienced leads. Associates dedicate 8+ hours per week to crucial topics in AI governance, technical AI safety, and digital sentience, gaining valuable research experience and building lasting professional networks.
FIG provides ongoing support, including co-working sessions, issue troubleshooting, and career guidance. The program features opening and closing events, networking opportunities, research sprints, and guest speakers from key cause areas.
Elliott Thornley, a FIG project lead, watches a presentation at our Spring 2024 Research Residency.
Fellowship Projects
AI Policy
AI Policy projects conduct robust, multidisciplinary research to inform governments’ responses to developments in AI.
In the next few months, we will work on:
Policy & Governance: projects in shaping rules, standards and institutions around AI on a national and international scale across private and public sectors.
Economy, Ethics & Society: projects on managing the effects of AI on economies, societies and power structures.
Philosophy for Safe AI
Philosophy for Safe AI projects use tools and concepts from academic philosophy to inform our approach to advanced AI.
In the next few months, we will work on:
Technical AI Safety: projects in LLM reward-seeking behaviour, definitions of cooperative artificial intelligence, and LLM interpretability.
Philosophical Fundamentals of AI Safety: projects in conceptual approaches to coexistence with advanced AI, and how AI agents make decisions under uncertainty.
AI Sentience
AI Sentience projects combine empirical research and philosophy to investigate ethical theories, models of consciousness, and the welfare of artificial agents.
In the next few months, we will work on:
Governance of AI Sentience: projects in research ethics and best practices for AI welfare, constructing reliable welfare evaluations, and more.
Foundational AI Sentience Research: projects in models of consciousness, eliciting preferences from LLMs, individuating digital minds, and evaluating normative competence.