AI x Sentience
FIG helps you build career capital: you can spend 5-10 hours a week working on foundational philosophical issues that inform technical AI safety and help mitigate catastrophic risks.
Our project leads are looking for postgraduate students across multiple fields (including computer science and philosophy), people with experience in machine learning, decision and game theory specialists, and well-read generalists with a track record of high-quality written work.
Scroll down to learn more. On this page, we list our focus area, project leads, and open projects.
Applications for the Winter 2025 FIG Fellowship are now open!
Apply here by ???!
Focus Areas
In the next few months, we will work on:
AI x Sentience
Project Leads
Projects
AI x Sentience
Understanding beliefs about AI sentience and their consequences for human-AI interaction, regulatory policy, and AI welfare.
Janet Pauketat
Research Fellow, Sentience Institute
Investigating Beliefs about AI Sentience
To understand human-AI interaction and its downstream consequences, such as support for regulatory policies and willingness to advocate for AI welfare, we need to better understand beliefs about AI. This project evaluates extreme beliefs about AI sentience, such as the belief in "awakening" chatbots, situating them within social scientific and psychological theories of conspiracy thinking, delusion, and persuasion. The project entails reviewing the psychological and human-computer interaction literature, designing a study, collecting and analyzing data, and writing a report.
The ideal candidate is comfortable independently reading, summarizing, and writing about social science research. They have experience, at a Master's level or higher, with social scientific methods such as surveys, text analysis, experiments, interviews, or focus groups. Creative thinking is desirable, and a background in human-computer interaction or in social, moral, or cognitive psychology would be especially useful.
Suryansh, a FIG co-founder, presenting his research at the Spring 2024 Research Residency.