
Center for Equitable Artificial Intelligence and Machine Learning Systems


Affiliated Research

AI/ML Student and Teacher Enrichment Program (STEP): A Secondary and Undergraduate STEAM Project

PI:
Cecelia Wright Brown, PhD

Co-PI(s):
Kevin Peters, PhD

Dept. or Schools:
School of Engineering 

Total Funding:

Agency:

Affiliation:
Affiliated Faculty

Ethical Considerations of AI-Driven Decision-Making Systems on Marginalized Communities

PI:
Dawn Thurman, PhD

Co-PI(s):
Rhonda Wells-Wilbon, PhD

Dept. or Schools:
School of Social Work

Total Funding:

Agency:

Affiliation:
Affiliated Faculty

Exploring the Ethical Implications of Artificial Intelligence in Social Work Education

PI:
Dawn Thurman, PhD

Co-PI(s):
Rhonda Wells-Wilbon, PhD

Dept. or Schools:
School of Social Work

Total Funding:

Agency:

Affiliation:
Affiliated Faculty

AI for Changing Climate and Environmental Sustainability

PI:
Samendra Sherchan, PhD

Co-PI(s):
Md Mahmudur Rahman, PhD

Dept. or Schools:
School of Computer, Mathematical and Natural Sciences

Total Funding:

Agency:

Affiliation:
Affiliated Faculty

Physics-informed neural networks based on the fixed-stress splitting iterative method for solving the poroelastic model

PI:
Mingchao Cai, PhD

Co-PI(s):

Dept. or Schools:
School of Engineering

Total Funding:

Agency:

Affiliation:
Affiliated Faculty

Published Article:

Physics-Informed Neural Networks and Fixed-Stress Splitting For Biot's Model Solution 

AI and Physics-informed Machine Learning Applications in Communication Systems

PI:
Arlene Cole-Rhodes, Electrical Engineering

Co-PI(s):

Dept. or Schools:
School of Engineering

Total Funding:

Agency:

Affiliation:
Affiliated Faculty

Published Article:

AI for Personalized Medicine

PI:
Fahmi Khalifa, Electrical & Computer Engineering

Co-PI(s):

Dept. or Schools:
School of Engineering

Total Funding:

Agency:

Affiliation:
Affiliated Faculty

Exploring Algorithmic Bias in Conversational AI

PI:
Naja Mack, Computer Science

Co-PI(s):

Dept. or Schools:
School of Computer, Mathematical & Natural Sciences

Total Funding:

Agency:

Affiliation:
Affiliated Faculty

Building for Health Equity through Artificial Intelligence and Machine Learning at Morgan State University

PI:
Kim Sydnor, Public Health & Policy

Co-PI(s):
Kofi Nyarko, Electrical Engineering

Dept. or Schools:
School of Community Health & Policy
Electrical & Computer Engineering

Total Funding:
$204,830

Agency:
NIH

Affiliation:
Affiliated

Characterization of health disparities in African ancestry and reduction of algorithmic bias

PI:
Pilhwa Lee, Mathematics

Co-PI(s):
Daniel Brunson, Philosophy & Religious Studies
Kofi Nyarko, Electrical Engineering

Dept. or Schools:
School of Computer, Mathematical & Natural Sciences

Total Funding:
$300,939

Agency:
NIH

Affiliation:
Affiliated

Algorithmic bias detection and fairness benchmarking for cloud-based AI and Machine Learning systems

PI:
Peter Taiwo, Electrical & Computer Engineering

Co-PI(s):

Dept. or Schools:
School of Engineering

Total Funding:

Agency:

Affiliation:
Affiliated Faculty

Project Description:

Overview:
Algorithmic bias in cloud-based AI applications is not just a theoretical concern but a significant challenge across diverse domains, including facial recognition, financial decision-making, public resource allocation, and medical diagnostics. This project is dedicated to developing methods to detect, characterize, and address these biases. By employing diverse analytical tools, we aim to assess and mitigate disparities within these AI systems, thereby ensuring equitable and transparent decisions. Our research combines comprehensive fairness benchmarks with innovative techniques such as model reprogramming to enhance AI functionality where direct model access is limited, making this work a crucial step in the fight against algorithmic bias.
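
To make the fairness-benchmarking idea concrete, the sketch below (Python, with entirely hypothetical data standing in for a black-box model's predictions) computes two standard group-fairness metrics, demographic parity and equalized odds, from predictions alone. It illustrates the general technique, not this project's actual benchmark suite.

import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rate across groups."""
    gaps = []
    for outcome in (0, 1):  # outcome 0 -> FPR comparison, outcome 1 -> TPR
        rates = [y_pred[(group == g) & (y_true == outcome)].mean()
                 for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Hypothetical audit data: ground truth, black-box predictions, group flags.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_pred = (rng.random(1000) + 0.1 * group > 0.5).astype(int)  # injected skew

print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
print("Equalized odds gap:", equalized_odds_gap(y_true, y_pred, group))

Because both metrics need only predictions and group labels, they can be evaluated against a cloud-hosted model through its prediction interface alone, which is what makes them suitable for the limited-access setting described above.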

Objectives and Significance:
Our primary objective is to standardize and refine methods for detecting and characterizing algorithmic biases across various AI systems, promoting consistency in fairness evaluations. After detecting a biased model and analyzing its specific characteristics, we aim to explore model reprogramming as a method to remotely modify the model and enhance its fairness. This strategic approach strengthens the integrity of AI systems and meets emerging regulatory demands for transparency and accountability in automated decision-making.
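
The paragraph above mentions model reprogramming, i.e., adapting a model one cannot retrain by learning a transformation of its inputs. The PyTorch sketch below is a minimal illustration under simplifying assumptions: the "deployed" model and data are placeholders, its weights are frozen rather than truly hidden, and gradients are assumed to flow through it (a genuinely query-only setting would need zeroth-order gradient estimation, omitted here).

import torch
import torch.nn as nn

class Reprogrammer(nn.Module):
    """Learnable additive input perturbation around a frozen model."""
    def __init__(self, frozen_model: nn.Module, input_dim: int):
        super().__init__()
        self.frozen = frozen_model
        for p in self.frozen.parameters():
            p.requires_grad = False  # deployed weights stay untouched
        self.delta = nn.Parameter(torch.zeros(input_dim))  # only trainable part

    def forward(self, x):
        return self.frozen(x + self.delta)

# Placeholder deployed model and data.
frozen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
model = Reprogrammer(frozen, input_dim=8)
opt = torch.optim.Adam([model.delta], lr=1e-2)

x = torch.randn(256, 8)
y = torch.randint(0, 2, (256, 1)).float()
group = torch.randint(0, 2, (256,))

for _ in range(200):
    opt.zero_grad()
    logits = model(x)
    task_loss = nn.functional.binary_cross_entropy_with_logits(logits, y)
    # Fairness penalty: gap in mean predicted score between the two groups.
    probs = torch.sigmoid(logits).squeeze(1)
    gap = (probs[group == 0].mean() - probs[group == 1].mean()).abs()
    (task_loss + gap).backward()  # trade-off weight of 1.0, chosen arbitrarily
    opt.step()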

Impacts:
Establishing industry standards for detecting and addressing algorithmic biases will fundamentally advance ethical AI practices. These efforts will advance the development of AI systems that operate equitably across a wide range of applications, thereby reducing biases and enhancing societal trust in technology. These initiatives are expected to influence policy, set standards, and provide educational resources and tools that can be used by data scientists and developers to build more equitable AI systems.

Excellence in Research: Building an Equitable and Sustainable Logistics System in Rural Areas with Drones

PI:
Ziping Wang, Information Systems

Co-PI(s):
Kofi Nyarko, Electrical Engineering
Xiazheng He

Dept. or Schools:
Information Science & Systems
Electrical & Computer Engineering

Total Funding:
$423,102

Agency:
NSF

Affiliation:
Affiliated

A Methodology for the Development of Cognitive Twins to Predict Behaviors & Bias

PI:
Gabriella Waters, Psychology

Co-PI(s):
Kofi Nyarko, Electrical Engineering
Justin Bonny, Psychology, Psychometrics

Dept. or Schools:
CLA

Total Funding:

Agency:
CEAMLS

Affiliation:
Affiliated Faculty

Project Description:

Introduction:
Predicting behaviors and bias is an area of great importance as we continue to observe negative health outcomes for certain populations, lethal interactions with law enforcement, inequitable punishments in educational settings, and effects on the performance of soldiers, among other concerns. The ability to predict and potentially mitigate the effects of bias and other behaviors has far-reaching impacts on the way our society functions and on the overall safety of every citizen. This research seeks to develop machine learning models, trained on datasets from various industries, that mimic the choices and predict the behavior and/or potential bias of individuals.

To succeed, this research requires collaboration between two departments at the university: Computer Science and Psychometrics. Cognitive science/psychometrics and computer science/engineering supply the vital components necessary to create appropriate behavioral assessment tools, develop the correct ML algorithms, and deploy and test the cognitive twins created from the datasets.

Problem Statement:
Industries as a whole have agreed that bias is undesirable and that the ability to predict behaviors is valuable. Many organizations employ personality tests and other behavioral assessments to determine how a team member will perform, or to form expectations about interacting with them. Participants, however, typically answer these assessments with responses they feel are appropriate, which are not necessarily accurate representations of their actual beliefs.

How can we reveal the most authentic behavior profile of a person?
How can we target bias and predict it?
What are the limitations of current technology and behavioral assessments?

Objective:
The long-term goal of this research is to develop cognitive twins that predict individual behaviors and biases. These cognitive twins will be trained to think like, and respond as, the individuals whose data they were trained on. The inference process of the ML models will be visualized for explainability, debugging and improvement, comparison and selection, and the teaching of concepts. Model architecture, learned parameters, and model metrics are the main areas of focus for visualization, both during and after training. The resulting twins will be tested in various scenarios, and their inferences and performance will be analyzed. Model training and model inference will guide refinements as appropriate.
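
As a deliberately simplified illustration of this pipeline, the Python sketch below fits a model to one hypothetical individual's scenario-and-choice data and reports the metrics and learned parameters that the visualization step would consume. The feature encoding and data are stand-ins; the project's actual behavioral instruments and model architectures are not specified above.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset: each row encodes a scenario presented to one person;
# each label records the choice that person actually made.
rng = np.random.default_rng(42)
scenarios = rng.normal(size=(500, 12))
choices = (scenarios @ rng.normal(size=12) + rng.normal(scale=0.5, size=500) > 0)

X_train, X_test, y_train, y_test = train_test_split(
    scenarios, choices, test_size=0.2, random_state=0)

twin = LogisticRegression().fit(X_train, y_train)

# Metrics and learned parameters: inputs to the visualization step above.
print("Held-out agreement with the individual:",
      accuracy_score(y_test, twin.predict(X_test)))
print("Learned per-feature weights:", twin.coef_.round(2))

Held-out agreement is the natural headline metric for a twin: the fraction of unseen scenarios in which the model's predicted choice matches the person's actual choice.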


Fairness in and beyond algorithms: a science studies approach to fair ML

PI:
Phillip Honenberger, Philosophy & Religious Studies 

Co-PI(s):

Dept. or Schools:
College of Liberal Arts

Total Funding:

Agency:

Affiliation:
Affiliated Faculty


Project Description:

Project Motivation:
“AI Ethics” is now a major research area, linking STEM, humanities, and social science researchers, as well as (in aspiration at least) policy makers and the wider public. Despite some commonality of concern, however, approaches to AI ethics often exhibit substantial differences along disciplinary lines. The topic of algorithmic justice – equity, fairness, bias, and the like – is an instructive case. Some researchers pursue algorithmic methods for reducing bias and improving fairness (recounted in Barocas et al. 2019), or wrestle with quantitatively expressible trade-offs between different criteria of fairness (Kleinberg et al. 2016) or between fairness in short-term versus long-term applications (Liu et al. 2018). Others, however, argue that efforts to achieve fairness primarily through adjustments to algorithms themselves can ignore, obfuscate, or even perpetuate the social factors responsible for unfairness (Fazelpour & Lipton 2020). Relatedly, some call for a decentering of emphasis on algorithms within AI ethics in favor of a richer social and experiential contextualization of AI/ML and its applications (Birhane et al. 2022).

Within this framework of debate, the methodological resources of science studies – including under that heading the traditions of history and philosophy of science (HPS) and science, technology, and society (STS), among others – offer distinct advantages. Since its inception with figures like G. Sarton, A. Koyré, R. Merton, L. Fleck and others, science studies researchers have adopted a dual perspective: (1) internalist: close and sympathetic study of technical details of the sciences they seek to understand and illuminate; and (2) externalist: a variety of philosophical, historical, sociological, and interpretive lenses through which such details can be seen in their larger significance, including their connections (both as effect and cause) with social factors, events, and processes. Science studies researchers often (if controversially) take up the challenge of integrating these potentially opposed perspectives.

In this project, the PI, an experienced and accomplished science-studies researcher with a background in computer-assisted data analysis, will engage the topic of fairness and bias in ML algorithms from just such a dual internalist-and-externalist perspective (captured in the catchphrase “in and beyond algorithms”), seeking an original and integrated interpretive standpoint.

If the discussion of bias and fairness in AI ethics is to avoid a methodological sundering, such that social scientists, humanists, and ML researchers cease to speak the same language or seek answers to their questions in a way that can provide consistent policy guidance, then some such common standpoint or methodological framework as that offered by science studies must be cultivated. This project establishes and builds on this integrative standpoint through three mutually-reinforcing components: publication, teaching, and research community (described further in next section). 

Deliverables:
(1) A research paper on bias and fairness in ML that showcases an “in-and-beyond algorithms” approach to the topic. The central anticipated feature of the resulting paper is engagement with both the technical aspects of ML design processes and specific algorithms, on the one hand, and the personal and social factors that have informed these and are affected by them, on the other (as, for instance, in Ensmenger 2012).

(2) Development (in Fall 2023) and pilot teaching (in Spring 2024) of a new lower-division undergraduate course at Morgan State that focuses on AI, Machine Learning, and Data Science literacy – that is, that provides an introduction and overview of these topics for non-specialists, including critical and informed discussion of their more controversial aspects. The focus of the course will be twofold: (i) understanding the contemporary capabilities and functioning of AI/ML/data science at a technical level (albeit rudimentary and without prerequisites) through learning the history of these technologies; and (ii) critically engaging with the ethical and political ramifications and risks of these technologies, including issues of bias, fairness, responsibility, safety, privacy, transparency, and accountability.

(3) A working group on “humanities and social science of AI” that will meet twice monthly, in hybrid format (Zoom and in-person), to discuss recent papers on the ethics, epistemology, sociology, and politics of AI/ML. A special emphasis will be placed on simultaneously engaging technical details of specific algorithms or processes, and the broader social, political, and interpretive contextualization available from humanities and social science methods. A major aim of the group will be to draw participants from STEM as well as humanities and social science departments, facilitating cross-disciplinary learning. 

PI Biography:
Phillip Honenberger holds a PhD in Philosophy from Temple University. His work has appeared in Biology & Philosophy, Studies in History and Philosophy of Science, and Synthese, among other forums. His research has been funded by the National Science Foundation and the Consortium for History of Science, Technology, and Medicine, among others. He has also worked as MySQL database designer for several data analysis projects.

Works cited:
Barocas, S., Hardt, M., and Narayanan, A. 2019. Fairness and Machine Learning. https://fairmlbook.org/
Birhane, A., et al. 2022. “The Forgotten Margins of AI Ethics.” Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22).
Chouldechova, A. 2017. “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments.” Big Data 5 (2).
Ensmenger, N. 2012. “Is Chess the Drosophila of AI? A Social History of an Algorithm.” Social Studies of Science 42 (1): 5-30.
Fazelpour, S., and Lipton, Z. 2020. “Algorithmic Fairness from a Non-Ideal Perspective.” Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES ’20).
Kleinberg, J., Mullainathan, S., and Raghavan, M. 2016. “Inherent Trade-Offs in the Fair Determination of Risk Scores.” arXiv:1609.05807.
Liu, L., et al. 2018. “Delayed Impact of Fair Machine Learning.” Proceedings of the 35th International Conference on Machine Learning (ICML 2018).