International Journal For Multidisciplinary Research
E-ISSN: 2582-2160
Fine-tuning Pre-trained Language Models to Detect In-game Trash Talks
| Author(s) | Daniel Fesalbon, Arvin De La Cruz, Marvin Mallari, Nelson Rodelas |
|---|---|
| Country | Philippines |
| Abstract | Common problems in playing online mobile and computer games are related to toxic behavior and abusive communication among players. Drawing on various reports and studies, this study also discusses the impact of online hate speech and toxicity on players' in-game performance and overall well-being. The study investigates the capability of pre-trained language models to classify or detect trash talk or toxic in-game messages, employing and evaluating pre-trained BERT and GPT language models for detecting toxicity within in-game chats. Using publicly available APIs, in-game chat data from DOTA 2 matches were collected, processed, reviewed, and labeled as non-toxic, mild (toxicity), or toxic. Around two thousand in-game chats were collected to train and test the BERT (Base-uncased), BERT (Large-uncased), and GPT-3 models. Based on the three models' state-of-the-art performance, this study concludes that pre-trained language models have promising potential for addressing online hate speech and insulting in-game trash talk. |
| Keywords | BERT, GPT, In-game Trash Talks, Toxic Chat Detection |
| Field | Computer > Artificial Intelligence / Simulation / Virtual Reality |
| Published In | Volume 6, Issue 2, March-April 2024 |
| Published On | 2024-03-13 |
| Cite This | Fine-tuning Pre-trained Language Models to Detect In-game Trash Talks - Daniel Fesalbon, Arvin De La Cruz, Marvin Mallari, Nelson Rodelas - IJFMR Volume 6, Issue 2, March-April 2024. DOI 10.36948/ijfmr.2024.v06i02.14927 |
| DOI | https://doi.org/10.36948/ijfmr.2024.v06i02.14927 |
| Short DOI | https://doi.org/gtmzsg |
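The abstract describes fine-tuning pre-trained encoder models for three-way toxicity classification (non-toxic, mild, toxic) on roughly two thousand labeled DOTA 2 chat lines. The sketch below illustrates what such a pipeline could look like for the BERT (Base-uncased) case using the Hugging Face `transformers` and `datasets` libraries; the CSV file name, column names, and hyperparameters are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch only: fine-tuning bert-base-uncased for 3-class
# toxicity classification (non-toxic / mild / toxic), in the spirit of the
# paper. File name, column names, and hyperparameters are assumptions,
# not the authors' actual setup.
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-uncased"
LABELS = ["non-toxic", "mild", "toxic"]  # the three classes named in the abstract

# Hypothetical CSV with columns "text" (chat line) and "label" (0, 1, or 2).
dataset = load_dataset("csv", data_files="dota2_chats.csv")["train"]
dataset = dataset.train_test_split(test_size=0.2, seed=42)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # In-game chat messages are short, so a small max_length keeps training cheap.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=len(LABELS),
    id2label=dict(enumerate(LABELS)),
    label2id={label: i for i, label in enumerate(LABELS)},
)

def accuracy(eval_pred):
    # Simple accuracy over the held-out split.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}

args = TrainingArguments(
    output_dir="bert-trash-talk",      # illustrative output directory
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["test"],
                  compute_metrics=accuracy)
trainer.train()
print(trainer.evaluate())
```

The same recipe extends to BERT (Large-uncased) by swapping the checkpoint name; the GPT-3 experiments mentioned in the abstract would instead go through OpenAI's fine-tuning or prompting interface rather than this local training loop.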
All research papers published on this website are licensed under Creative Commons Attribution-ShareAlike 4.0 International License, and all rights belong to their respective authors/researchers.