Researchers Tackle AI Biases in Loan Approvals and Discounts, Aim for Fairer Outcomes
November 8, 2024

Artificial intelligence (AI) is increasingly integrated into critical decision-making processes, influencing areas such as sentencing for offenders, approving car purchases, and targeting advertisements.
Researchers from the University of Iowa and Texas A&M are investigating biases in AI models, particularly in their applications for loan decisions and online discounts.
These AI models utilize machine learning to analyze data, which can inadvertently introduce biases that affect decisions based on gender, race, ethnicity, and age.
Their findings reveal that AI models tend to favor individuals from statistically advantaged groups, resulting in unfair outcomes in significant areas like loans and mortgages.
To combat these biases, Qihang Lin proposed a method for constructing a balanced list of individuals, although balancing the list in this way can come at the cost of relevance for some groups.
Tianbao Yang emphasized the necessity of categorizing individuals in a manner that prioritizes fairness alongside the accuracy of AI models.
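The article does not detail the researchers' specific methods. As a minimal illustration of the kind of group fairness check such work addresses, the sketch below computes per-group approval rates and their demographic-parity gap; the group labels and decisions are entirely hypothetical:

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions: (group label, approved?)
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
print(approval_rates(decisions))          # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(decisions))  # 0.5
```

A gap near zero means the model approves all groups at similar rates; a large gap is one signal of the kind of disparity the researchers describe, though fairness definitions beyond demographic parity also exist and can conflict with accuracy.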
Thiago Serra noted the rapid expansion of AI applications due to the availability of vast data, but cautioned that careful management is essential to prevent misuse.
Historical contexts, such as AI winters, serve as reminders for researchers and businesses about the need for sustainable expectations and ongoing funding for AI development.
In 2022, Lin and Yang received an $800,000 grant from the National Science Foundation to support their research on AI biases.
The researchers aim to adjust AI models to promote a more equitable distribution of discounts and loan approvals by developing a diverse input dataset.
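One common way to make a training dataset more equitable, which may or may not resemble the researchers' actual approach, is to reweight examples so each group contributes equally during training. A minimal sketch, with hypothetical group labels:

```python
from collections import Counter

def balancing_weights(groups):
    """Inverse-frequency weights so each group contributes equal
    total weight during training (weights sum to len(groups))."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# A majority group "A" and a minority group "B"
groups = ["A", "A", "A", "B"]
print(balancing_weights(groups))  # [0.666..., 0.666..., 0.666..., 2.0]
```

Members of the underrepresented group receive proportionally larger weights, so the model sees the groups as equally important even when the raw data is skewed.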
The research underscores that not all AI models function effectively in the same manner, necessitating tailored approaches to achieve equitable outcomes.
Source: The Daily Iowan, Nov. 8, 2024 — "Iowa researchers work to identify bias in AI models"