Exploring the Security and Privacy Risks of Chatbots in Messaging Services

Jide Edu, Cliona Mulligan, Fabio Pierazzi, Jason Polakis, Guillermo Suarez-Tangil, Jose Such

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review



The unprecedented adoption of messaging platforms for work and recreation has made them an attractive target for malicious actors. In this context, third-party apps (so-called chatbots) offer a variety of attractive functionalities that enrich the user experience in large channels. Unfortunately, under the current permission and deployment models, chatbots in messaging systems could steal information from channels without the victim’s awareness. In this paper, we propose a methodology that combines static and dynamic analysis to automatically assess security and privacy issues in messaging-platform chatbots. We also report preliminary findings from the popular Discord platform that highlight the risks chatbots pose to users. Unlike other popular platforms such as Slack or MS Teams, Discord does not implement user-permission checks, a task it entrusts to third-party developers. Among other findings, 55% of chatbots from a leading Discord repository request the “administrator” permission, and only 4.35% of chatbots with permissions actually provide a privacy policy.
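As background (not part of the paper itself), Discord encodes a bot’s requested permissions as an integer bitmask in its OAuth2 invite URL (the `permissions` query parameter), and the administrator flag is bit `0x8` per Discord’s documented permission flags. A minimal sketch of checking whether an invite requests that all-powerful permission:

```python
# Decode the `permissions` bitmask carried by a Discord bot invite URL,
# e.g. ...oauth2/authorize?client_id=...&scope=bot&permissions=8
# Bit values follow Discord's documented permission flags.

ADMINISTRATOR = 0x8  # grants every permission and bypasses channel overwrites

def requests_administrator(permissions: int) -> bool:
    """Return True if the invite's permission bitmask includes ADMINISTRATOR."""
    return bool(permissions & ADMINISTRATOR)

# A bot asking for "administrator" vs. one asking only to view/send messages:
print(requests_administrator(8))     # → True  (administrator bit set)
print(requests_administrator(3072))  # → False (VIEW_CHANNEL 0x400 | SEND_MESSAGES 0x800)
```

Because the administrator bit subsumes every other permission, a bot that requests it can read any channel it can see, which is why the 55% figure above is notable.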
Original language: English
Title of host publication: Proceedings of the 22nd ACM Internet Measurement Conference (IMC '22), October 25--27, 2022, Nice, France
Subtitle of host publication: ACM
ISBN (Electronic): 978-1-4503-9259-4/22/10
Publication status: Accepted/In press - 19 Sept 2022

