International Journal For Multidisciplinary Research (IJFMR)
E-ISSN: 2582-2160 • Impact Factor: 9.24
A Widely Indexed Open Access Peer Reviewed Multidisciplinary Bi-monthly Scholarly International Journal
Toward Fair NLP Models: Bias Detection and Mitigation in Cloud-Based Text Mining Services
| Field | Details |
|---|---|
| Author(s) | Devashish Bornare, Mandar Zade, Shivpratap Jadhav, Rohit Mohite, Anuja Chincholkar |
| Country | India |
| Abstract | As Natural Language Processing (NLP) increasingly becomes essential across various applications, the challenge of bias within these models has attracted considerable scrutiny. Cloud-based text mining services offered by platforms such as Google Cloud, AWS, and Microsoft Azure have made NLP technologies more accessible, allowing businesses and developers to utilize advanced language models. Nevertheless, the existence of biases—be they gender, racial, or socioeconomic—in these models raises significant concerns regarding fairness and equity in automated decision-making processes. This paper examines the pressing issue of bias in cloud-based NLP models, investigating methods for both identifying and alleviating such biases. We analyze current approaches to bias detection, which include dataset evaluation, fairness metrics, and algorithmic audits, and we review techniques for bias mitigation at various stages of the NLP pipeline, from data preprocessing to the post-processing of model outputs. Particular emphasis is placed on the challenges presented by the opaque nature of cloud services, which can obscure model behaviour and impede transparency. The paper concludes with suggestions for incorporating bias mitigation strategies into cloud-based NLP systems to enhance fairness, uphold ethical standards, and ensure responsible AI practices. |
| Keywords | Natural Language Processing (NLP), Bias detection, Cloud-based text mining services, Google Cloud, AWS, Microsoft Azure |
| Field | Computer > Artificial Intelligence / Simulation / Virtual Reality |
| Published In | Volume 6, Issue 6, November-December 2024 |
| Published On | 2024-11-24 |
| Cite This | Toward Fair NLP Models: Bias Detection and Mitigation in Cloud-Based Text Mining Services - Devashish Bornare, Mandar Zade, Shivpratap Jadhav, Rohit Mohite, Anuja Chincholkar - IJFMR Volume 6, Issue 6, November-December 2024. DOI 10.36948/ijfmr.2024.v06i06.30703 |
| DOI | https://doi.org/10.36948/ijfmr.2024.v06i06.30703 |
| Short DOI | https://doi.org/g8r8kw |
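The abstract mentions fairness metrics as one approach to bias detection. As a minimal illustrative sketch (not taken from the paper), one widely used metric is the demographic parity difference: the gap in positive-prediction rates between two demographic groups. The predictions and group labels below are hypothetical.

```python
def demographic_parity_difference(y_pred, groups, group_a, group_b):
    """Absolute gap in positive-outcome rates between two groups.

    y_pred: iterable of binary predictions (0 or 1)
    groups: iterable of group labels, aligned with y_pred
    """
    def positive_rate(g):
        members = [p for p, grp in zip(y_pred, groups) if grp == g]
        return sum(members) / len(members)

    return abs(positive_rate(group_a) - positive_rate(group_b))


# Illustrative binary outputs from a hypothetical text classifier:
# group "A" receives positive predictions at 3/4, group "B" at 1/4.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_difference(preds, groups, "A", "B"))  # 0.5
```

A value near 0 suggests the classifier assigns positive outcomes at similar rates across groups; larger values flag a disparity worth auditing further.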
All research papers published on this website are licensed under the Creative Commons Attribution-ShareAlike 4.0 International License; all rights belong to their respective authors/researchers.