“When it comes to AI or any other technological advancement, there’s a lot of skepticism and a lot of fear,” said Don Simmons, Assistant Professor in the School of Library and Information Science (SLIS), in his presentation, “AI and Libraries: Leading the Technological Shift,” on November 15, 2024. To situate the advent of AI historically, Simmons recounted how society has grappled with the impact of new technologies over the last several decades, from television and video games to computers and Wi-Fi internet connections.
“People really thought that computers were going to be an existential threat to automation workers and data analytic workers,” said Simmons. “They thought that this was not only going to replace humans in regards to work, but also humans in general.” He noted that, for those with the ability to use digital technologies, computers have instead expanded the job market, creating more opportunities in a variety of industries.
While Simmons says he is not an AI expert, his research focuses on AI’s implications for the Library and Information Science (LIS) profession. He also explores how AI can be used productively, ethically, and safely, and researches ways for LIS students to develop generative AI literacy and strengthen their professional practice.
The debate surrounding the use of AI in LIS has surged since the emergence of ChatGPT [a generative artificial intelligence chatbot launched in 2022 by OpenAI].
“I think that within our profession we have a lot of opposing views on whether we should embrace generative AI, figure out what the guardrails should be, or reject it totally,” said Simmons. “A healthy discourse can really open our perspectives from the other side of the table, and whether we should incorporate this within our systems or not.”
Deep Fakes and Misinformation
Simmons noted the importance of identifying “deep fake” images, sharing as an example an image of Pope Francis wearing a Balenciaga coat.
“A deep fake…is an image that isn’t real, but it looks real. It’s AI generated…by DALL-E or Midjourney [generative AI systems that create images from text prompts]. It’s a game changer, because it makes it easy for people without any technical skill to create a deep fake image like this, and with such realness it really looks like he's actually wearing a Balenciaga coat,” he said.
There are numerous ethical concerns about deep fakes, especially when they are used to intentionally mislead, with serious sociopolitical implications.
“Generative AI…can influence people’s opinions,” said Simmons. “It's all about personalizing the content, which people can use nefariously, unleashing misleading content, which can be a big issue within our current landscape. And this also leads to bias and discrimination, whether we're using it within generative AI systems or seeing it [show up] within big search engines like Google.” He also briefly mentioned other valid concerns, such as privacy risks, copyright issues, surveillance ethics, and how AI negatively impacts the environment.
In addition to intentional misinformation, generative AI is susceptible to “hallucinations,” which, in this context, refers to output in which the software generates false or misleading information and presents it as fact.
“The system is predicting language patterns rather than verifying facts,” said Simmons, “and when it gives you information, especially if it's not accurate to the prompts that you gave them, it will give you some form of misinformation, which can then mislead the user.”
Risks and Rewards of Generative AI
Generative AI uses pre-existing content from the digital landscape as the basis to create new text, images, sounds, or even video.
“It’s generating new information that looks similar to what it has learned before, which is the reason why it can create all of this content without us,” said Simmons.
ChatGPT is an example of generative AI that creates new content based on a large amount of data that it accesses from the internet. “It [ChatGPT] raises questions about who owns the rights to that AI-generated content, and whether it infringes upon the original creator's copyrights,” he said.
On the positive side, there are AI programs that can offer accommodations to people with disabilities, such as generating transcriptions of sound recordings.
“It's not just generative AI that exists. There are different types of softwares…even though the concerns and risks are valid, the benefits [can be] incredible,” said Simmons.
Learning about the tools connected to generative AI can help a wide range of people, especially those from marginalized and disadvantaged communities. Simmons also emphasized that we can develop our generative AI skills while identifying and learning about the associated risks and ethical issues, because generative AI is “here to stay, and will keep evolving and become more common as time goes on.”
Simmons noted: “Anything under the AI umbrella is always evolving every single day, but we should be aware of this ever-changing period of the digital landscape, and of AI itself.”
The Simmons University Library offers an Introduction to Generative AI on YouTube. In addition, two recent articles from MIT News consider Generative AI’s environmental impact and what consumers can do to mitigate that impact.