Elizabeth Seger is a researcher at the Centre for the Governance of AI (GovAI) in Oxford, where she investigates the real and potential impacts of AI on the production, dissemination, and internalization of information in technologically advanced societies. The aim of Elizabeth's research is to identify specific "epistemic threats" posed by AI technologies, to determine whether those threats create or exacerbate pathways to existential risk, and to appraise how those pathways might most effectively be intervened upon. Elizabeth's work is an extension of the Epistemic Security project she led at CSER in 2020, with which she continues to collaborate.
Elizabeth recently completed her PhD in Philosophy of Science at the University of Cambridge, during which time she was a research assistant at the Leverhulme Centre for the Future of Intelligence (LCFI). Elizabeth's PhD research investigated foundations for trust in user-AI relationships by analogy with trust in human lay-expert relationships.
Elizabeth also holds an MPhil in History and Philosophy of Science from the University of Cambridge and a BSc in Human Biology and Society from UCLA.
Resources
- The Epistemic Security report
  Tackling threats to informed decision-making in democratic societies: Promoting epistemic security in a technologically-advanced world
  Report by Elizabeth Seger, Shahar Avin, Gavin Pearson, Mark Briers, Seán Ó hÉigeartaigh, and Helena Bacon
- The Trustworthy AI report
  Toward Trustworthy AI: Mechanisms for supporting verifiable claims
  Report by M. Brundage et al., incl. Haydn Belfield, Elizabeth Seger, and Seán Ó hÉigeartaigh
- In Defence of Principlism in AI Ethics and Governance
  Paper by Elizabeth Seger in Philosophy & Technology