Our Vision for FIG
Most opportunities to work on the world’s most pressing problems are full-time, in-person roles in just a handful of cities, and it can be difficult to choose which problems to prioritise. We believe there are many talented would-be researchers out there with the desire to have a positive impact, so we’ve set out to make it easier for them to work on these problems part-time and remote-first.
Since starting in 2023, we’ve supported high-quality work on topics ranging from digital minds to global health. Now, we’re focusing on three priority cause areas that we believe could have an outsized effect on the long-term future.
AI Policy is where theory meets practice, helping policymakers make difficult decisions. What policy settings protect the benefits and limit the risks of transformative AI? Is it better to compete or cooperate in building transformative AI, and how does this change if we think it could be dangerous?
Philosophy for Safe AI helps us build rigorous ethical foundations for technical research into AI safety and its implications. What should we want, and how can we know? What should we do when we have limited information, little time, and uncertain outcomes? Can AI be conscious, and if so, what does that mean for our shared future?
AI Sentience explores questions surrounding machine consciousness and subjective experience in artificial systems. How do we detect or measure sentience in AI? What are the ethical implications if AI becomes genuinely conscious, and how should we treat potentially sentient systems? These questions become increasingly urgent as AI capabilities advance and the boundaries between intelligence and consciousness blur.
We’d encourage you to apply now or submit an expression of interest for future fellowships!
Suryansh Mehta & Luke Dawes
Future Impact Group
Yuqi Liang, a FIG participant, presents his work on global priorities.
Meet the Team
We’re proud to run FIG, and hope that we can help you change the world with your research career!
Luke Dawes
Managing Director
Before joining FIG, I taught AI governance to professionals from the UN, the EU, the UK government and NATO as a Teaching Fellow for BlueDot Impact. I supported research into the UK’s policy response to AI-driven risks at the Centre for Long-Term Resilience (CLTR). I also served as a diplomat at the Australian Embassy in Tehran, where I worked on political and development issues.
FIG runs the kind of programs I wish I’d known about earlier in my career, which is why I’m so excited to help build it. I want to help early- to mid-career folks contribute to high-impact research, whether they’re trying to accelerate their careers or pivot to new ones, and I want to support important work across AI governance and philosophy.
Suryansh Mehta
Co-founder & President
I pivoted from grantmaking research about cost-effective global health and development interventions to working on AI safety around the time that ChatGPT changed the world.
Since then, I've spent a couple of years working on a textbook about AI risks and collaborating with labs and think tanks in the UK & US to inform our approach to safely deploying advanced AI systems. Since my writing has reached over 100,000 people, I'd like to think that I've educated some people, changed some minds, and had some impact!
In 2023, I set up FIG to help others achieve what I set out to do: enter the field to work on the most pressing global issues, develop the skills and network to contribute effectively to solving them, and have an outsized impact through their careers.
Marta Krzeminska
Programme Operations Lead
A growth marketer and startup ninja, I decided at the end of 2022 to pivot into AI safety. Combining my experience in operations, data analysis, and marketing, I hope to address what I think matters most: the transformative risks posed by advanced AI systems.
Before joining FIG, I supported operations and research into how governments can address AI-driven risks in the AI safety unit at the Centre for Long-Term Resilience (CLTR), and worked with several other organisations in the AI safety space, including facilitating BlueDot Impact’s AI governance courses.
FIG’s mission resonates with me, and I’m excited to support researchers and fellows in making a meaningful impact in AI governance and safety.