Work on impactful problems now.

Join a research group for a part-time fellowship to help you develop an impactful research career.

Spend 5-10 hours a week making a difference while gaining valuable research experience on the world's most pressing problems. We focus on global issues where research can have an outsized impact, and where we can secure excellent mentors to support our fellows.

  • Work on specific projects. Apply to work on research led by project leads from a wide range of high-profile organisations, including Google DeepMind, GovAI and the Global Priorities Institute.

  • Boost your future impact. Add direct experience with well-known researchers and publications to your CV. You can develop high-value professional relationships that lead to future opportunities, and build further connections with our support.

  • Work remote-first and part-time. We want to find the research talent that other programmes miss, often because location and time constraints are a barrier to entry. We welcome applications from a wide variety of people, including students, existing researchers, and early- to mid-career professionals looking to pivot into the field.

Our project leads are from organisations at the cutting edge of AI safety and governance.

Fellowship Projects

Philosophy for Safe AI

Philosophy for Safe AI projects use tools or concepts from academic philosophy to inform our approach to advanced AI.

In the next few months, we will work on:

Philosophical Fundamentals of AI Safety: projects in decision theory, AI macro-strategy, and conceptually guided experiments in machine learning.

AI Sentience: surveys of expert opinion, literature reviews, and applying insights from philosophy of mind to models of consciousness that could include artificial agents.

AI Policy

AI Policy projects conduct robust, multidisciplinary research to inform governments’ responses to AI.

In the next few months, we will work on:

Economics & Society: measuring the economic effects of advanced AI and proposing ways to manage them.

National & International Policy: proposing how the US executive branch should regulate AI, determining the effect of regulation on AI releases, and helping countries coordinate on managing AI risks.

Writing & Journalism: concise, incisive analyses to guide the decision-making of DeepMind, Longview Philanthropy, and the AI safety community.

Miscellaneous: exploring wider questions, including metascience research for AI safety R&D and the governance of agentic AI through case studies from finance.

How the fellowship works:

    • Recruit project leads. We invite experienced researchers to become project leads and help them find research associates to work with.

    • Give research associates opportunities. We accept applications from a wide range of early- to mid-career candidates and assess their fit for specific projects as part-time, remote-first research associates.

    • Provide continuous, impact-focused support. We check in regularly over twelve weeks, providing active support with building career capital for our research associates, accelerating impactful work for our project leads, and fostering strong professional networks for both.

  • We’re concerned about the risks posed by transformative AI systems. We believe there is important work to be done right now on AI Policy and Philosophy for Safe AI.

    We think there are talented people who can make a difference but can’t access full-time, in-person opportunities. So we help them do it part-time and remote-first.

    We’re proud of the work we’ve done so far, and we know there’s so much more to do.

    • Build your career capital to find an impactful role. Some of our previous participants now work at OpenAI, the UK’s Department for Science, Innovation & Technology, the Office of the UN Secretary-General’s Envoy on Technology, and the Centre for the Governance of AI.

    • Accelerate and apply your work. FIG projects are usually directly relevant to key decision-makers at important institutions. Research associates have increased the positive impact of ML academics at top universities; national governments and international organisations; AI companies such as DeepMind; and other policy and industry practitioners looking to mitigate the risks posed by AI.

    • Enjoy the network effects of new professional relationships. Previous FIG research associates have started biosecurity research groups, and previous FIG project leads have expanded the AI governance community to include former policymakers who can offer new insights.

Supporting you through your journey to impact.

Contact

Join our opportunities mailing list and ask us any questions.

We send emails infrequently (5 in the last year).

Email
info@futureimpact.group

LinkedIn
www.linkedin.com/company/future-impact-group