Turn the tables: How we use GPT to detect phishing websites
During the last year, discussions have been ongoing about the threat that large language models, like the recently published GPT from OpenAI, pose to cyber security. Attackers could
use “AI” to create malware or new attack vectors, such as social engineering, with greater proficiency. As such, the cyber security community at large expects a rise in phishing attacks that benefit
from such services. Phishing websites are easy to create, and thus a vast number come online on a daily basis. This calls for accelerating the analysis and classification of potential
phishing sites. In our research we turn the tables, using machine learning and large language models to detect and successfully block phishing sites.
Large language models are well suited to identifying common structures within text-based datasets. Since the concept of phishing sites has stayed the same over recent years, a pre-trained model could be of long-term use without the need for re-training. We show that large language models can also learn the variations of phishing sites without visually classifying them, as well-known applications have done in the past. This helps identify phishing sites even when attackers change their modus operandi or target market, shifting, e.g., from specific local banking sites to grabbing online service tokens.
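To illustrate the text-based (rather than visual) approach, the following sketch shows what one DOM feature extractor might look like. The class and the extracted features (page title and form field names) are hypothetical illustrations, not the paper's actual extraction methods; the idea is that such text features stay stable across visual redesigns of a phishing kit.

```python
from html.parser import HTMLParser


class DomFeatureExtractor(HTMLParser):
    """Hypothetical example of a single DOM feature extractor:
    collect the page title and all form input names, which phishing
    pages often reuse across campaigns even when the visual design
    changes."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.form_fields = []

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
        elif tag == "input":
            name = dict(attrs).get("name")
            if name:
                self.form_fields.append(name)

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data


def extract_features(html_doc):
    """Return a compact text representation of the DOM that could be
    fed to a language model as a classification prompt."""
    parser = DomFeatureExtractor()
    parser.feed(html_doc)
    return {"title": parser.title.strip(), "fields": parser.form_fields}
```

A representation like this, serialized as text, is what a fine-tuned language model would receive instead of a screenshot.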
We utilized OpenAI’s API to fine-tune several models based on GPT-3. The evaluation of our classification system shows an F1 score of 0.92 with a phishing certainty threshold of 90%. In particular, the system is able to identify phishing with high precision. The classification is modular and based on nine methods for extracting relevant features from DOMs. After an initial evaluation of these DOM features, we combined them into a robust ensemble classifier to efficiently distinguish phishing from clean sites. We based this on 4,020 training and 1,980 testing URLs gathered from internal sources, which were manually labelled beforehand. The cost savings of this approach are remarkable: we spent less than $20 to train all our models, and each classification costs $0.001 on average. In our research we show that this approach scales easily to track a large number of sites and classify them in a short time.
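A minimal sketch of how the ensemble decision described above could work, assuming each of the nine feature-based models returns a phishing probability for a site. The aggregation by simple averaging and the function names are illustrative assumptions; the 90% certainty threshold is the one stated in the abstract.

```python
def ensemble_verdict(probabilities, threshold=0.90):
    """Combine per-model phishing probabilities into one verdict.

    probabilities: one phishing probability per feature model
                   (nine in the setup described above).
    threshold:     certainty cut-off; a high value trades recall
                   for the high precision reported in the paper.

    Averaging is an assumed aggregation strategy, not necessarily
    the one used in the actual system.
    """
    mean_p = sum(probabilities) / len(probabilities)
    label = "phishing" if mean_p >= threshold else "clean"
    return label, mean_p


# Usage: nine hypothetical per-model scores for one URL.
label, certainty = ensemble_verdict(
    [0.97, 0.95, 0.99, 0.92, 0.96, 0.98, 0.94, 0.93, 0.97]
)
```

With a 90% threshold, only sites where the models agree with high certainty are flagged, which matches the high-precision behaviour reported for the system.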
For testing purposes we conducted a real-world case study with 1,900 URLs from a phishing feed, which also allowed us to evaluate the quality of this feed for our production systems. This shows how our system can support real-world automation that reduces time and cost for human analysts and enables mass analysis of phishing URLs.
In this paper we present our procedure for working with the OpenAI API, describe common limitations when working with data from phishing sites, compare our different representation methods, and finally evaluate our results. We also show that using LLMs is an effective way to combat cyber criminals around the world while saving costs on manual analysis and classification.
Mr. Eduard Alles
Eduard Alles studied at the Ruhr-University Bochum. He wrote his master's thesis on “Automatic Decryption After a Ransomware Attack by an AV-Solution” and has worked since 2022 as a Virus Analyst at G DATA CyberDefense AG. His work focuses mainly on threat hunting and the detection of browser-based attacks.
Mr. Marius Benthin
Since November 2022 Marius Benthin has been working as a Junior Virus Analyst at G DATA CyberDefense AG. He finished his master's degree in IT Security at the Ruhr-University Bochum with a thesis on malware attribution to APT actors. In March 2020 he completed his dual studies at the University of Applied Sciences Darmstadt in cooperation with G DATA.