Ursinus Magazine

How can we live together…with AI?

Imagine checking out at a local produce stand and watching as the clerk totals your order on a desk calculator. Neither you nor the clerk seems concerned about the possibility of an error; the calculator is more reliable than we are at that kind of arithmetic.

Yet we have all experienced the effects of a “system error” at some point in our lives: an item returned to a store without a refund ever arriving, or a credit card that was declined or an account that was closed without warning. It is difficult to reach a person who can explain why these things happened, and when we do, they are likely to insist that the decision was correct because “the computer said so.” If a calculator produces the correct result every time, why can’t systems like artificial intelligence (AI) models do the same?

Unlike calculators, AI systems are designed to identify patterns in data and to extrapolate new decisions from the patterns they find. Unfortunately, the historical data fed to these systems may contain implicit and explicit biases. If an AI is developed to make credit decisions for consumer credit cards, and the historical data it observes shows that women receive lower credit limits than men, it will infer that gender is an indicator of creditworthiness. The algorithms that underlie these systems promise only to identify relationships and to calculate what is likely to occur if that history is repeated. The risk of harm from using these systems stems from the data used to train the model that makes those decisions; because historical biases are likely to permeate that data, this risk disproportionately affects vulnerable populations.
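To see how little it takes, consider a minimal sketch in Python. The data is synthetic and the feature names (income, is_female) are hypothetical, but the mechanism is the real one: a model fit to historical approvals that penalized women learns a negative weight on gender itself.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    income = rng.normal(60, 15, n)     # annual income, in $1,000s
    is_female = rng.integers(0, 2, n)  # 0 = male, 1 = female

    # Historical approvals: income mattered, but women were also docked
    # roughly $20,000 of "effective income" for no legitimate reason.
    approved = income - 20 * is_female + rng.normal(0, 5, n) > 45

    # Fit a model to that biased history and inspect what it learned.
    X = np.column_stack([income, is_female])
    model = LogisticRegression(max_iter=1000).fit(X, approved)

    print("weight on income:   ", model.coef_[0][0])  # positive, as expected
    print("weight on is_female:", model.coef_[0][1])  # strongly negative

Nothing in the algorithm is malicious; it faithfully reproduces the pattern it was shown. The risk lives in the data, not in the math.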

Additionally, generative AI, of which GPT models are the most familiar example, has enabled machines to produce new content. Like other machine learning systems, GPTs work by training on large samples of data, and this, too, carries the potential for harm with respect to both the input data and the outcomes. The training data may be copyrighted, and the output may be biased toward the sources from which that data was obtained. GPTs are not guaranteed to produce correct, or even meaningful, results, though they will confidently generate plausible text from the patterns in their models. Many assume that the output is correct because it was “computed,” and then act on it.
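The distinction between plausible and correct is easy to demonstrate in miniature. The toy sketch below is a bigram model, vastly simpler than a real GPT but built on the same next-token principle: it generates fluent-looking text by sampling what tended to follow each word in its training data, and at no point does anything check whether the result is true.

    import random
    from collections import defaultdict

    corpus = ("the credit model approved the loan . "
              "the credit model declined the loan . "
              "the loan officer approved the request .").split()

    # Record which words followed each word in the training text.
    table = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        table[prev].append(nxt)

    # Generate text by repeatedly sampling a plausible next word.
    random.seed(1)
    word, output = "the", ["the"]
    for _ in range(10):
        word = random.choice(table[word])
        output.append(word)
    print(" ".join(output))  # fluent-looking, but nothing verifies it

A real GPT replaces the lookup table with a neural network trained on a vast corpus, but the guarantee is the same: plausibility, not accuracy.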

Broadly, state and federal governments are working to regulate the development and use of AI. The federal “Blueprint for an AI Bill of Rights” lists high-level principles that should guide the deployment of such systems for consumer protection, including transparency about how automated systems use data about individuals and how they maintain privacy. Pennsylvania and the European Union have each recently published potential guidance for the use of AI systems as well. These include ensuring that the output of generative AI systems is accurate and verifiable, as well as equitable and fair; that the use of generative AI is disclosed; and that any copyrighted data used to train the model is acknowledged.

However, to borrow from the Ursinus Quest curriculum, the burning question should be, “How should we live together?” We tend to trust the computer and its underlying data (AI or not) because the computer indicates what we expect to be true. A human-centric practice would require that:

  • Diverse stakeholders are engaged to critically evaluate the data sets used to train AI models.
  • A person transparently explains the rationale for each decision, including a technical explanation of the AI’s internals.
  • Those affected by a decision enjoy a right to appeal to a governmental body on the basis of the input data used to arrive at that decision, and of any disparity that results from it.
  • Mechanisms and personnel are employed to prevent inequitable data from being used to train computing systems.
  • Independent audits are regularly conducted on the input data and the decisions generated by computing systems, to detect and correct inequitable outcomes.
  • The use of AI systems, and the data used to train them, is cited.

We are witnessing, in real time, a public reckoning with the role technology should play in our lives. We must legislate to empower people, both users and stakeholders of AI, to understand how these systems are trained and the limitations of their outputs. Starting from human-centric principles of beneficence and equity provides a framework from which to have these conversations.