Managing the risks of AI in libraries

29 July 2024 • Jon Bentley, commercial director

AI can offer many benefits to research libraries and their users – but it’s important to be aware of the risks

Do the benefits outweigh the risks of artificial intelligence (AI) in libraries? This was one of the hot topics covered at our Access Lab 2024 event.

At the conference, we learned that AI has many benefits – but also limitations that librarians should understand.

“AI is not yet an actual intellect,” warned Dr Luba Pirgova-Morgan, research fellow at the University of Leeds, in her opening keynote.

Instead, the speaker said, it’s “a low-grade and sometimes broken mirror of those who engage with it. The more we know its limitations – and how to engage, interpret and verify – the more the mirror cleans up.”

"AI is not yet an actual intellect. It's a low-grade and sometimes broken mirror of those who engage with it. The more we know its limitations - and how to engage, interpret and verify - the more the mirror cleans up"

Dr Luba Pirgova-Morgan

With that in mind, here are some of the main risks of AI in libraries.

Errors and hallucinations

One important issue is the potential for error. So-called “hallucinations” – outputs that sound plausible but are factually wrong – are common, and manual checks are often needed.

Excessive error rates could undermine those benefits, warned Access Lab panelist Matthew Weldon of Technology from Sage.

For example, if an AI accessibility tool has a 10% error rate, then people who rely on it are accessing lower-quality information than their peers.

“I would argue that an error rate like that is not acceptable when it comes to making your teaching more accessible,” Weldon said.

What counts as an acceptable error rate, and in which contexts, is an ethical question for libraries to consider.
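
To make that question concrete, a library can measure an error rate itself by manually spot-checking a random sample of a tool’s outputs. Below is a minimal sketch in Python – the sample figures and the `estimate_error_rate` helper are hypothetical, not from the talk – that turns those manual checks into an observed error rate with a rough 95% confidence interval.

```python
import math

def estimate_error_rate(sample_results, confidence_z=1.96):
    """Estimate a tool's error rate from manually checked outputs.

    sample_results: list of booleans, True where a human reviewer
    judged the AI output to be wrong.
    Returns (observed_rate, low, high) using a normal-approximation
    95% confidence interval.
    """
    n = len(sample_results)
    errors = sum(sample_results)
    p = errors / n
    margin = confidence_z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical spot check: 200 outputs reviewed, 18 judged wrong.
checks = [True] * 18 + [False] * 182
rate, low, high = estimate_error_rate(checks)
print(f"Observed error rate: {rate:.1%} (95% CI roughly {low:.1%}-{high:.1%})")
```

A wide interval is itself useful information: it means too few outputs have been checked to judge the tool either way.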

Biased data

Another risk is bias. An AI model is typically “trained” on data sets – but if the underlying data is unbalanced or lacks diversity, biases may appear in the results.

Bias in algorithms, Dr Pirgova-Morgan told Access Lab, could “perpetuate existing inequalities or even exclude certain groups”, while biased data could cause an AI system to “inadvertently discriminate against specific demographics”.

A situation to avoid is where AI becomes a ‘black box’, making discriminatory decisions based on hidden assumptions.

Privacy versus personalization

A third ethical challenge is privacy. One potential benefit of AI is more useful personalization: a tool could, for example, remember a user’s searches and predict their next step.

But researchers may object to this level of personalization. And both users and publishers may object to their data being used to train an AI model, whether for privacy reasons or for reasons of intellectual property.

The importance of transparency

Given these risks, what can libraries do to mitigate them? The answer, perhaps, is transparency.

Transparent consent is, of course, a cornerstone of privacy protection. Bias can be reduced by being clear about the underlying data sets – and by keeping humans involved in decision-making. Clear citation of sources, meanwhile, helps to address AI-driven research errors.
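
As one concrete example of citation transparency, a library workflow could verify that references an AI tool produces actually exist. The sketch below is a minimal illustration, not a feature of any tool discussed at Access Lab: it looks up each DOI in the public Crossref REST API, where a 404 response suggests the citation may be hallucinated.

```python
import urllib.request
import urllib.error

def doi_exists(doi: str) -> bool:
    """Return True if the DOI is known to Crossref.

    Queries the public Crossref REST API; a 404 suggests the
    citation may be fabricated. Other errors are re-raised so
    they aren't mistaken for a missing record.
    """
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

# Hypothetical AI-generated reference list to verify.
for doi in ["10.1038/nature12373", "10.9999/fake.doi.123"]:
    print(doi, "->", "found" if doi_exists(doi) else "not found in Crossref")
```

A real workflow would also compare titles and authors, since a DOI can resolve correctly yet be attached to the wrong claim.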

At OpenAthens, we believe researchers should still be able to access proprietary content seamlessly, no matter what AI-driven discovery tools they use. Indeed, this will be vital as users make careful checks of underlying research.

Ultimately, AI is a human creation – and humans have the power to put guardrails in place to mitigate its risks.

Watch our Access Lab AI playlist to hear the full discussion.

[A full version of this article was published in Research Information on 11 July 2024.]