Policy Talks 2024

A National Convening on Ethics, Narratives, and Artificial Intelligence
Friday, February 9, 2024

Research on Artificial Intelligence continues to advance at a rapid pace, with developments closely tracked by a host of governments, regulatory agencies, and oversight bodies. No one is sure what the future will bring, either for the technology itself or for the laws and policies that govern it. What is clear is the transformative power of AI for human experience.

Part of the trouble with prediction, regulation, and ethics in this area is that the term ‘Artificial Intelligence’ refers to a sprawling and diverse field of technology. No two AI systems are quite alike: task-respondent systems function differently from systems designed to simulate agents within teams or to support dynamic decision-making. Different goals yield different methods, functions, applications, and outcomes, and the moral stakes shift as we consider different aspects of human-AI activity. How to design an AI system with goals such as trustworthiness in mind is one question; how doctors should make use of recommendations from AI is another. These questions differ and cannot be answered with the same moral concepts, methods, or practices.

To narrow the focus and frame our conversations, the topic for Policy Talks 2024 is “When Human Narratives Meet Artificial Intelligence: Responsible Design and User Protection.” Narratives encode and convey information among humans with an efficiency, memorability, and impact that mere propositions cannot match. Narratives are powerful tools for communication and understanding, and AI systems that make use of narrative features and dynamic functions affect user psychology in powerful ways.

Our focus on narrative is not about students cheating on their creative writing papers using AI systems. It is about how some AI systems utilize narrative features and how they impact the psychology of human agents. We are familiar with algorithms on social media that can maintain and transmit narratives which strongly reinforce beliefs that desperately need to be questioned. Similarly, AI systems used in police training can support a narrative—operating below the level of a user’s conscious awareness—that has devastating consequences.

How are we to understand narratives and AI? What moral obligations do users and developers have to ensure their safe and ethical use? What guardrails ought to be in place for design and application? These are open questions that need to be answered. Policy Talks 2024 is our contribution to mapping out the moral landscape for AI, ethics, and narratives. To explore answers, we will examine existing policy proposals, analyses and ethical concepts from philosophy, methods and technologies from data and computer science, concepts of narrative and their features from narratology, and psychological research on the impact of narratives on human agents. We will examine this underexplored area of human-AI interaction, ask hard questions, and work toward practical solutions to real-world problems.

Working Groups

Working groups are the heart of Policy Talks. They are composed of the experts we have invited to help us gain a better understanding of the context, risks, and opportunities of our topic. Each person is assigned to a working group and joins an intimate yet vigorous discussion about the topic with fellow experts and stakeholders. A notetaker* will keep track of the ideas the group covers. The notes and ideas generated in working groups provide the base materials for the initial report we send to all attendees. That report serves as our launching point as we dive deeper into our investigation and draft the white paper.

Designers & Communicative Interaction

What assumptions are users and designers making about communicative interactions with AI systems?

Users & Trust

How is trust in AI systems understood, gained, used, and abused?

Guidance & Guardrails

What guidelines should organizations or institutions adopt to ensure the ethical development and adoption of AI technology?

Education & Development

How can we train budding computer scientists and engineers to build AI systems with ethical constraints in mind?

Societal Values & Public Impact

What changes will AI bring for workers, social institutions, disadvantaged groups, etc.?

Domain-Specific Insights

Are there ways to help policymakers ensure that insights from diverse fields and industries (medicine, law enforcement, etc.) are consistently included?

Speakers

Dr. Qin Zhu

Associate Professor, Engineering Education, Virginia Tech

Kellee Wicker

Director, Science and Technology Innovation Program, Wilson Center

Dr. Ori Freiman

Post-Graduate Fellow, Digital Society Lab, McMaster University

Dr. Nate Tenhundfeld

Chair and Associate Professor, Department of Psychology, University of Alabama in Huntsville

Dr. Qin Zhu  Associate Professor, Engineering Education, Virginia Tech

Dr. Qin Zhu is an Associate Professor in Engineering Education and Affiliate Faculty in Science, Technology & Society, Philosophy, and the Center for Human-Computer Interaction at Virginia Tech. He is Associate Editor for Science and Engineering Ethics and Studies in Engineering Education and Editor for International Perspectives at the Online Ethics Center. Dr. Zhu serves on the Board of Directors for the Association for Practical and Professional Ethics (APPE) and the Executive Committee of the Society for Ethics Across the Curriculum (SEAC). His research explores cultural values in engineering education and the ethics of robotics and AI.

Dr. Nate Tenhundfeld  Chair and Associate Professor, Department of Psychology, University of Alabama in Huntsville

Dr. Nate Tenhundfeld is an Associate Professor and Interim Chair of the Department of Psychology at the University of Alabama in Huntsville. He received his PhD in 2017 from Colorado State University and completed a postdoc at the Warfighter Effectiveness Research Center at the US Air Force Academy before joining the faculty at UAH in 2019. His research focuses on human interactions with automation, AI, and robotics. To date, he has published over 50 journal articles and conference proceedings and has received $2.3 million in grant funding. He was awarded the 2022 UAH Undergraduate Research Mentor Award, the 2018 Raja Parasuraman Award for Scientific Impact from the International Neuroergonomics Society, and the 2023 “Big of the Year” award from the Tennessee Valley chapter of Big Brothers Big Sisters of America.

Kellee Wicker  Director, Science and Technology Innovation Program, Wilson Center

Kellee Wicker leads the Science and Technology Innovation Program (STIP) at the Wilson Center, a Congressionally chartered think tank providing nonpartisan insights on global affairs. STIP conducts research on emerging technologies, including AI, cybersecurity, and space in the commercial age, offering insights to Congress, global policymakers, and the public. Beyond traditional research, STIP uses games and experiential learning to equip policymakers with the knowledge to craft smart legislation that safeguards individuals while fostering innovation. An Ole Miss alum, Kellee graduated from the Croft Institute for International Affairs and the Barksdale Honors College and earned master’s degrees in global public policy and Latin American studies from the University of Texas at Austin.

Dr. Ori Freiman  Post-Graduate Fellow, Digital Society Lab, McMaster University

Dr. Ori Freiman is a Post-Doctoral Fellow at McMaster University's Digital Society Lab and The Centre for International Governance Innovation's Digital Policy Hub. His research sits at the intersection of emerging technologies, democracy, and societal change, focusing on AI policy and ethics, Central Bank Digital Currencies, and the topic of trust. Previously, he completed a Post-Doctoral Fellowship at the University of Toronto's Ethics of AI Lab. His academic education is rooted in Philosophy, Library & Information Science, and Science and Technology Studies. Ori has authored several reports about technology and human rights, privacy, and democracy.