Abstract
The unprecedented adoption of messaging platforms for work and recreation has made them an attractive target for malicious actors. In this context, third-party apps (so-called chatbots) offer a variety of useful functionalities that enrich the experience in large channels. Unfortunately, under the current permission and deployment models, chatbots in messaging systems can steal information from channels without the victims' awareness. In this paper, we propose a methodology that combines static and dynamic analysis to automatically assess security and privacy issues in messaging-platform chatbots. We also provide preliminary findings from the popular Discord platform that highlight the risks chatbots pose to users. Unlike other popular platforms such as Slack or MS Teams, Discord does not implement user-permission checks, a task it instead entrusts to third-party developers. Among other findings, we observe that 55% of chatbots from a leading Discord repository request the "administrator" permission, and that only 4.35% of the chatbots requesting permissions provide a privacy policy.
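To illustrate the permission model at issue, below is a minimal sketch (illustrative only, not code from the paper) of how a Discord bot invite URL encodes the permissions it requests as an integer bitfield, and how the "administrator" bit can be detected. The `ADMINISTRATOR` value (`1 << 3`, i.e. `0x8`) comes from Discord's documented permission flags; the URL and the helper function `requests_admin` are hypothetical.

```python
# Sketch: detecting the "administrator" permission in a Discord bot invite URL.
# Assumption: the invite link carries a `permissions` query parameter holding
# Discord's integer permission bitfield, as documented in the Discord API.
from urllib.parse import urlparse, parse_qs

ADMINISTRATOR = 1 << 3  # 0x8, per Discord's permission flag documentation


def requests_admin(invite_url: str) -> bool:
    """Return True if the bot invite URL requests the administrator permission."""
    query = parse_qs(urlparse(invite_url).query)
    permissions = int(query.get("permissions", ["0"])[0])
    return bool(permissions & ADMINISTRATOR)


# Hypothetical invite link requesting only the administrator permission (8).
url = "https://discord.com/oauth2/authorize?client_id=123&scope=bot&permissions=8"
print(requests_admin(url))  # True
```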
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 22nd ACM Internet Measurement Conference (IMC '22), October 25–27, 2022, Nice, France |
| Publisher | ACM |
| ISBN (Electronic) | 978-1-4503-9259-4 |
| Publication status | Accepted/In press - 19 Sept 2022 |