This presentation explores how artificial intelligence can moderate toxic behavior in online gaming communities through data mining and natural language processing (NLP). Based on the 2025 paper “A Comparative Study on Toxicity Detection in Gaming Chats Using Machine Learning and Large Language Models” by Yehor Tereshchenko and Mika Hämäläinen, the slides compare traditional machine learning models, fine-tuned transformer architectures like DistilBERT, and modern large language models (LLMs).
The slides highlight the trade-offs among model accuracy, computational cost, and latency, and explain how hybrid human-AI moderation systems can balance fairness with efficiency. The presentation also connects the research to CMPE 255 data-mining concepts such as pattern recognition, clustering, anomaly detection, and dimensionality reduction.
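To make the "traditional machine learning" end of the comparison concrete, here is a minimal sketch of a Naive Bayes toxicity classifier for chat messages. The training messages, tokenization, and model choice are illustrative assumptions for this writeup, not the dataset or the specific baselines evaluated in the Tereshchenko and Hämäläinen paper.

```python
from collections import Counter
import math

# Toy chat messages with labels (1 = toxic, 0 = non-toxic).
# These are hypothetical examples, not drawn from the paper's dataset.
train = [
    ("you are trash uninstall now", 1),
    ("reported you for griefing idiot", 1),
    ("nice shot well played", 0),
    ("good game everyone thanks", 0),
    ("worst player ever quit the game", 1),
    ("great teamwork on that push", 0),
]

def tokenize(text):
    return text.lower().split()

# Count word frequencies per class (multinomial Naive Bayes).
word_counts = {0: Counter(), 1: Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(tokenize(text))

vocab = set(word_counts[0]) | set(word_counts[1])

def score(text, label):
    # Log prior plus log likelihood with add-one (Laplace) smoothing.
    total = sum(word_counts[label].values())
    logp = math.log(class_counts[label] / sum(class_counts.values()))
    for word in tokenize(text):
        logp += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
    return logp

def predict(text):
    return 1 if score(text, 1) > score(text, 0) else 0

print(predict("uninstall the game idiot"))   # → 1 (flagged as toxic)
print(predict("well played nice teamwork"))  # → 0 (not flagged)
```

A cheap model like this runs with negligible latency and cost, which is exactly why the paper weighs such baselines against heavier transformer and LLM approaches: the accuracy gap must justify the added compute.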