An Educational and Validation Tool for Cyber Threat Intelligence Leveraging Large Language Models

Stiven Janku, Hannan Xiao*, Timmy Caris

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


Abstract

Cyber Threat Intelligence (CTI) has always played a pivotal role in proactive cybersecurity. However, with the emergence of Large Language Models (LLMs), generating and disseminating false or misleading CTI has never been easier. Existing research has found that fabricated CTI reports can successfully deceive cybersecurity professionals, yet there is a notable gap in methods for detecting such fabrications. This paper addresses how LLM-based approaches can serve as a powerful tool for validating the authenticity of reported threats. We propose VeraCTI, a framework for evaluating text-based intelligence through a structured ranking of sources, automated keyword extraction, and a final AI-based analysis that yields a probability score to identify potential misinformation. Our evaluation using 150 CTI reports (authentic, LLM-generated, and hybrid) demonstrates strong classification performance with an overall F1-score of 0.88, achieving particularly high accuracy for completely fabricated reports while identifying partially manipulated content with moderate success. Beyond technical validation, VeraCTI serves as an educational platform for cybersecurity practitioners through its transparent, step-by-step analysis process, which can be deployed in Security Operations Centres (SOCs) to simultaneously enhance threat verification capabilities and develop analysts' critical assessment skills. By operating on the principle that "all information is false until proven", VeraCTI addresses a critical gap in current CTI validation approaches and demonstrates how AI systems can be leveraged responsibly to counter AI-generated misinformation.
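The three-stage pipeline the abstract describes (source ranking, keyword extraction, then an AI-based probability score) could be sketched roughly as follows. This is a minimal illustrative sketch only: the trust tiers, weights, stopword list, and function names are all assumptions for exposition, not the authors' implementation, and the final stage here substitutes a simple weighted heuristic for the paper's AI-based analysis.

```python
# Hypothetical sketch of a VeraCTI-style validation pipeline.
# All tiers, weights, and thresholds below are illustrative assumptions.
import re
from collections import Counter

# Stage 1 input: an assumed trust ranking for source tiers (not from the paper).
SOURCE_TRUST = {
    "national_cert": 0.95,
    "vendor_advisory": 0.9,
    "news_outlet": 0.6,
    "social_media": 0.3,
    "unknown": 0.1,
}

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "was", "on", "against"}


def rank_source(source_type: str) -> float:
    """Stage 1: map the reporting source to a trust score in [0, 1]."""
    return SOURCE_TRUST.get(source_type, SOURCE_TRUST["unknown"])


def extract_keywords(text: str, k: int = 5) -> list[str]:
    """Stage 2: naive frequency-based keyword extraction from the report text."""
    words = re.findall(r"[a-z0-9\-]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [w for w, _ in counts.most_common(k)]


def misinformation_probability(text: str, source_type: str,
                               known_indicators: set[str]) -> float:
    """Stage 3 (stand-in for the AI-based analysis): combine source trust and
    keyword corroboration into a probability that the report is fabricated
    (higher = more suspicious). Starts from "all information is false until
    proven": suspicion decays only as corroborating evidence accumulates."""
    trust = rank_source(source_type)
    keywords = extract_keywords(text)
    corroborated = sum(1 for kw in keywords if kw in known_indicators)
    coverage = corroborated / len(keywords) if keywords else 0.0
    return round(max(0.0, 1.0 - 0.5 * trust - 0.5 * coverage), 2)
```

Under these assumptions, an uncorroborated report from a low-trust tier scores close to 1.0 (likely fabricated), while a vendor advisory whose keywords match known indicators scores much lower. The paper's actual final stage uses an LLM-based analysis rather than this fixed weighting.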
Original language: English
Title of host publication: 3rd International Workshop on Cyber Security Education for Industry and Academia (CSE4IA 2025)
Publication status: Accepted/In press - May 2025

