
Artificial Intelligence

This subject guide provides information, suggested readings, and resources on the topic of Artificial Intelligence.

Hallucinations

Hallucinations refer to instances where AI technologies, particularly large language models, generate information that deviates from reality. The information in these outputs can be false, inaccurate, nonsensical, or entirely fabricated. These outputs can be surreal in appearance, similar to how humans may see figures in clouds or faces in inanimate objects. 

AI hallucinations occur due to various factors such as overfitting, biased training data, inaccuracies in the model's decoding process, or high model complexity. 

 

Types of Hallucinations

  • Prompt Misinterpretations
    • A generated response may not answer or reflect the prompt.
       
  • Contradictory Statements
    • A generated statement may contain contradictions from one sentence to another.
       
  • Fictional Facts
    • AI will invent facts or present fictional statements as fact. 
       
  • Miscalculations
    • AI can provide incorrect answers and explanations for equations and math problems, both purely numerical problems and word problems.

  • Incorrect Word Counts
    • A generated response may not adhere to a user-specified word count. 
       
  • Invented Sources
    • When asked to provide a list of sources, AI will generate realistic but fictional citations.

 

Bias

Because AI is designed by humans and trained on human information, it exhibits human biases. The introduction of bias into a system may be purposeful, as in a program meant to model or predict human preferences, or unintentional, such as through the use of flawed training data. Bias can also develop out of the particular way in which an AI program is used. With the proliferation of AI comes the amplification of existing biases. This reality poses a significant threat to individuals, groups, and society. Ensuring that AI technologies are beneficial, trustworthy, and safe requires careful consideration of the human limitations behind their creation.

The National Institute of Standards and Technology stresses the importance of viewing AI through a socio-technical lens, taking into account "the values and behavior modeled from the datasets, the humans who interact with [AI technologies], and the complex organizational factors that go into [AI's] commission, design, development, and ultimate deployment."

Systemic

Systemic biases arise from the procedures and practices of particular institutions that advantage or favor certain social groups while disadvantaging or devaluing others. Systemic bias is also referred to as institutional or historical bias.

  • May or may not reflect conscious prejudice or discrimination.
  • Institutional racism and sexism are the most common examples.
  • A lack of universal design principles that hinders accessibility for persons with disabilities is another example.

Statistical/Computational

Statistical and computational biases stem from errors that arise when the sample is not representative of the population.

  • Result from systematic, not random, error and can occur in the absence of prejudice, partiality, or discriminatory intent.
  • Often arise when algorithms are unable to extrapolate beyond their training data.

Possible causes:

  • Heterogeneous data
  • Wrong data
  • Representation of complex data in simpler mathematical forms
  • Over- and under-fitting
  • Treatment of outliers
  • Data cleaning and imputation factors
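Two of these causes, overfitting and the inability to extrapolate beyond a training dataset, can be illustrated with a small sketch. This is a hypothetical example, not drawn from the guide: a polynomial with too many parameters fits noisy but essentially linear training data perfectly, then fails badly when asked about a value outside the range it was trained on.

```python
import numpy as np

# Hypothetical illustration: the true relationship is simply y = x,
# observed with a little noise over the range [0, 1].
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 8)
y_train = x_train + rng.normal(0.0, 0.05, size=x_train.shape)

# A degree-7 polynomial has enough parameters to pass through every
# training point -- it fits the noise as well as the signal (overfitting).
coeffs = np.polyfit(x_train, y_train, deg=7)

# Inside the training range the fit looks reasonable...
in_range_error = abs(np.polyval(coeffs, 0.5) - 0.5)
# ...but asked to extrapolate to x = 2.0, outside anything it has seen,
# the prediction diverges badly from the true value of 2.0.
out_of_range_error = abs(np.polyval(coeffs, 2.0) - 2.0)

print(f"error at x=0.5 (interpolation): {in_range_error:.3f}")
print(f"error at x=2.0 (extrapolation): {out_of_range_error:.3f}")
```

A simpler model (for example, a straight-line fit) would generalize better here; the point is that an algorithm can be highly accurate on its own training data while being systematically wrong outside it.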

Human

Human biases reflect systematic errors in human thought (cognitive bias) and unconscious judgments or attitudes about people or groups (implicit bias).

Cognitive biases can result from reliance on heuristics. Heuristics are experience-based, practical mental shortcuts that generally yield quick, approximate solutions to problems that would otherwise take much time and mental effort to solve precisely.

"Common sense" is a form of heuristics that tends to serve people well. "Do not touch the stove coils when they are red."

A problem with common sense, however, is that not every situation has a simple, obvious solution. Employing common-sense heuristics can prevent one from recognizing the complexity of a situation. "Forest fires are incredibly destructive, so we should aggressively extinguish them all." Except that many plant and tree species actually require periodic fires for their seeds to germinate and reproduce; low-intensity fires help clear out dead vegetation, which, if left to accumulate, fuels even more catastrophic wildfires; and forest fires recycle nutrients back into the soil and promote biodiversity.

There is a wide variety of human biases (See ACAPS Technical Brief on Cognitive Biases for some examples). Cognitive and perceptual biases show themselves in all domains and are not unique to human interactions with AI. Rather, they are a fundamental part of the human mind.

Consequences 

Although this graphic focuses on health disparities, its modeling of the interplay of World-Data-Design-Use is applicable to any form of systemic discrimination or injustice. 

 

Recommended Reading