Experts available on ChatGPT and AI language models

The emergence of OpenAI's ChatGPT and other large language artificial intelligence models will impact nearly every industry on the planet, and the very way we approach questions, Northwestern University experts say.

Faculty are available to discuss many facets of this changing paradigm. Connect with them directly using the contact information below, or reach out to media relations for assistance.

Interview the experts on ChatGPT and artificial intelligence

Jeremy Birnholtz

Human/computer interaction

Professor in Communication Studies

Birnholtz’s research focuses on human-computer interaction issues such as attention, online identity, and collaboration through the use of technology. He runs the Social Media Lab and investigates how people develop “folk theories,” or simplified explanations of system function that guide their behavior. Even when technically incorrect, folk theories can be a valuable guide in navigating the complexities of AI. 

Nick Diakopoulos

Journalism, ethics and AI

Associate Professor in Communication Studies and Computer Science (by courtesy)

Diakopoulos' research is oriented around computational journalism, including projects on AI, automation and algorithms in news production and distribution. He also studies AI, ethics and society and can speak to algorithmic accountability, transparency and impact. He is the author of Automating the News: How Algorithms Are Rewriting the Media (Harvard University Press), and he recently participated in a webinar about ChatGPT's impact on journalism and how the technology might change the field.

Dr. Catherine Gao

Using ChatGPT in scientific writing

Instructor of Medicine (Pulmonary and Critical Care)

In a study available as a preprint on bioRxiv and currently undergoing peer review, Gao and her team found that ChatGPT could produce realistic, convincing scientific abstracts. Gao says the tool could be used in exciting ways, but additional discussion and exploration from the scientific community are needed to determine its acceptable and optimal uses. (Listen to a Feinberg Breakthroughs podcast about the findings.) Gao also is interested in how this technology can be used responsibly in healthcare to assist physicians.

Jeremy Gilbert

Technology and journalism

Knight Professor in Digital Media Strategy

Gilbert explores the intersection of technology and media to understand how new tools and techniques, including content generated through artificial intelligence, will affect the creation, consumption and distribution of media.

Kris Hammond

AI natural language writing

Bill and Cathy Osborn Professor of Computer Science
Director, Center for Advancing Safety for Machine Intelligence
Director, Master of Science in Artificial Intelligence

As a pioneer in AI and generative language systems, Hammond co-founded Narrative Science, a tech startup that generates natural-language stories from big data. He is an expert in AI safety and ethics, building human capabilities into machines and integrating AI into all aspects of life. His lab works on AI projects for the judicial system, education and human health and safety.

Kristi Holmes

AI and information literacy

Director, Galter Health Sciences Library; Professor, Preventive Medicine (Health and Biomedical Informatics); Chief of Knowledge Management, Institute for Augmented Intelligence in Medicine (I.AIM)

As chief of Knowledge Management for Feinberg's Institute for Augmented Intelligence in Medicine (I.AIM), Holmes believes that, in addition to questions of access, it is important to determine how ChatGPT will affect larger information ecosystems already complicated by issues of information literacy.

Mohammad Hosseini

Ethics of AI in research

Postdoctoral Scholar in the Department of Preventive Medicine and Associate Editor of the journal Accountability in Research

Hosseini believes that banning the use of large language models (LLMs) in research is controversial and unenforceable. He is the author of a recent editorial suggesting ethical guidelines for using LLMs in research, a preprint on using LLMs in scholarly peer review and an opinion piece on using LLMs in education.

Dr. Abel Kho

Ethical impact on medicine

Director of the Institute for Augmented Intelligence in Medicine (I.AIM) and of the Center for Health Information Partnerships at the Institute for Public Health and Medicine (IPHAM)

Kho believes that having discussions about ChatGPT now can help inform new regulations that ensure the tool remains both accessible and equitable.

Dr. David Liebovitz

AI in clinical medicine

Co-Director, Institute for Augmented Intelligence in Medicine's Center for Medical Education in Data Science and Digital Health; Associate Vice Chair, Clinical Informatics; Associate Professor, General Internal Medicine, Health and Biomedical Informatics

Liebovitz has taught clinical informatics for several decades, incorporating new educational methods and applications of AI within clinical patient care. He has been chief medical information officer at two organizations, where he actively implemented AI in clinical medicine. He has contributed to publications applying AI methods and data science to the analysis of electronic health records data, including an August 2022 study in the New England Journal of Medicine. He also has served on conference planning committees and presented sessions on the application of AI to healthcare.

Daniel Linna

AI and the law

Director of Law and Technology Initiatives

Linna's teaching and research focus on innovation and technology, including computational law and artificial intelligence. He is currently experimenting with ChatGPT and other large language models on a chatbot platform that aims to support tenants in rental housing.

Duri Long

Human/AI interaction

Assistant Professor in Communication Studies

Long is a human-centered AI researcher interested in issues surrounding AI literacy and human-AI interaction. She studies how humans interact and learn as a way of informing the design of public AI literacy interventions as well as the development of AI that can interact naturally and improvise creatively with people in complex social environments.

Yuan Luo

AI automation, bias and misinformation

Associate Professor of Preventive Medicine (Health and Biomedical Informatics)

As chief AI officer for the Northwestern Clinical and Translational Sciences (NUCATS) Institute and the Institute for Augmented Intelligence in Medicine (I.AIM) at Feinberg, Luo can discuss the risk that ChatGPT and other AI bots could spread misinformation and promote bias. But ChatGPT can also be used for good, Luo says, helping automate the writing process, which is often the speed bottleneck in generating and disseminating knowledge. Ethical and practical gaps still need to be bridged, he says.

V.S. Subrahmanian

Predictive AI models

Walter P. Murphy Professor of Computer Science
Buffett Faculty Fellow, Buffett Institute for Global Affairs

Subrahmanian is an expert on the intersection of AI and security. He develops AI models to forecast actions and influence outcomes. His AI models have been used to forecast terror attacks and the evolution of terror networks, reduce poaching, identify bad actors on social media, forecast systemic banking crises, maximize airline profits, predict whether apps are malware and analyze and identify deepfakes.

Nina Wieda

AI in education

Assistant Professor of Instruction in the Chicago Field Studies Program

Wieda is trained to analyze everyday behaviors through the prism of the values and ideas that shape them. She can discuss how to use ChatGPT in the classroom, how to get students to engage critically with AI and why it is better to thoughtfully embrace new technologies than to reject them.