Inf Syst Front. 2022;24(5):1465-1481. doi: 10.1007/s10796-021-10156-2. Epub 2021 Jun 20.

Questioning Racial and Gender Bias in AI-based Recommendations: Do Espoused National Cultural Values Matter?

Manjul Gupta et al. Inf Syst Front. 2022.

Abstract

Recommender systems, one realm of AI, have attracted significant research attention due to concerns about their devastating effects on society's most vulnerable and marginalised communities. Both the popular press and academic literature provide compelling evidence that AI-based recommendations help to perpetuate and exacerbate racial and gender biases. Yet, there is limited knowledge about the extent to which individuals might question AI-based recommendations when they are perceived as biased. To address this gap in knowledge, we investigate the effects of espoused national cultural values on AI questionability by examining how individuals might question AI-based recommendations due to perceived racial or gender bias. Data collected from 387 survey respondents in the United States indicate that individuals with espoused national cultural values associated with collectivism, masculinity and uncertainty avoidance are more likely to question biased AI-based recommendations. This study advances understanding of how cultural values affect AI questionability due to perceived bias, and it contributes to the current academic discourse about the need to hold AI accountable.

Keywords: Algorithmic bias; Artificial intelligence; Culture; Ethical AI; Gender bias; Racial bias; Recommender systems; Responsible AI.


Figures

Fig. 1. Scenario-based research model

