During my PhD I conducted 5 main research projects with Phishalytics:
With more than 500 million daily tweets from over 330 million active users, Twitter constantly attracts malicious users aiming to carry out phishing and malware-related attacks against its user base. It is therefore vital to assess the effectiveness of Twitter's use of blacklists in protecting its users from such threats.
We collected more than 182 million public tweets containing URLs from Twitter's Streaming API over a 2-month period and compared these URLs against 3 popular phishing, social engineering, and malware blacklists, including Google Safe Browsing (GSB). We focused on the delay between an attack URL first being tweeted and its first appearance on a blacklist, as this is the timeframe in which blacklists do not warn users, leaving them vulnerable.
Experiments show that, whilst GSB is effective at blocking a number of social engineering and malicious URLs within 6 hours of being tweeted, a significant number of URLs go undetected for at least 20 days. For instance, during one month, we discovered 4,930 tweets containing URLs leading to social engineering websites that had been tweeted to over 131 million Twitter users. We also discovered 1,126 tweets containing 376 blacklisted Bitly URLs that had a combined total of 991,012 clicks, posing serious security and privacy threats. In addition, an equally large number of URLs contained within public tweets remain in GSB for at least 150 days, raising questions about potential false positives in the blacklist. We also provide evidence to suggest that Twitter may no longer be using GSB to protect its users.
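The core measurement in this study is the delay between a URL's first appearance in a tweet and its first appearance on a blacklist. A minimal sketch of that calculation, with hypothetical timestamps (the function name and example data are illustrative, not part of our actual pipeline):

```python
from datetime import datetime, timezone

def blacklist_delay_hours(first_tweeted, first_blacklisted):
    """Hours between a URL first being tweeted and it first appearing
    on a blacklist. A negative value means the URL was already
    blacklisted when it was tweeted."""
    return (first_blacklisted - first_tweeted).total_seconds() / 3600.0

# Hypothetical timestamps for one attack URL
tweeted = datetime(2017, 10, 1, 9, 0, tzinfo=timezone.utc)
blacklisted = datetime(2017, 10, 1, 15, 30, tzinfo=timezone.utc)

print(blacklist_delay_hours(tweeted, blacklisted))  # 6.5
```

Aggregating this delay across all blacklisted URLs in the dataset is what reveals both the fast detections (within 6 hours) and the long tail of URLs that go undetected for 20 days or more.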
This study investigates how effective Twitter's URL shortening service (t.co) is at protecting users from phishing and malware attacks. We show that over 10,000 unique blacklisted phishing and malware URLs were posted to Twitter during a 2-month timeframe in 2017. This led to over 1.6 million clicks which came directly from Twitter users -- therefore exposing people to potentially harmful cyber attacks. However, existing research does not explore whether blacklisted URLs are blocked by Twitter at time of click.
Our study investigates Twitter's URL shortening service to examine the impact of filtering blacklisted URLs that are posted to the social network. We show an overall reduction in the number of blacklisted phishing and malware URLs posted to Twitter in 2018-19 compared to 2017, suggesting an improvement in Twitter's effectiveness at blocking blacklisted URLs at time of tweet. However, only about 12% of these tweeted blacklisted URLs -- which were not blocked at time of tweet and therefore posted to the platform -- were blocked by Twitter in 2018-19.
Our results indicate that, despite a reduction in the number of blacklisted URLs at time of tweet, Twitter's URL shortener is not particularly effective at filtering phishing and malware URLs -- therefore people are still exposed to these cyber attacks on Twitter.
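Determining whether a URL is blocked "at time of click" comes down to inspecting how the t.co shortener responds when the link is followed. A simplified sketch of that classification logic (the function, status-code convention, and example URLs are hypothetical assumptions, not Twitter's documented behaviour; a real crawler would issue the HTTP request and pass the response in):

```python
# Sketch: classify a t.co click as blocked or unblocked, assuming
# (as a simplification) that an unblocked short link answers with a
# 301 redirect straight to the original URL, while a blocked link
# serves a warning page or error instead of redirecting.

def is_blocked_at_click(status_code, location):
    """Return True if the shortener did NOT transparently redirect
    to the destination URL (i.e. the click was intercepted)."""
    return not (status_code == 301 and location is not None)

# Unblocked: 301 redirect to the (hypothetical) destination
print(is_blocked_at_click(301, "http://phishy.example/login"))  # False

# Blocked: a warning page is served instead of a redirect
print(is_blocked_at_click(200, None))  # True
```

Measuring this at scale, across all blacklisted URLs that made it onto the platform, is what yields the roughly 12% time-of-click blocking figure reported above.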
Blacklists play a vital role in protecting internet users against phishing attacks. The effectiveness of blacklists depends on their size, scope, update speed and frequency, and accuracy -- among other characteristics. We present a measurement study that analyses 3 key phishing blacklists: Google Safe Browsing (GSB), OpenPhish (OP), and PhishTank (PT). We investigate the uptake, dropout, typical lifetimes, and overlap of URLs in these blacklists.
During our 75-day measurement period we observe that GSB contains, on average, 1.6 million URLs, compared to 12,433 in PT and 3,861 in OP. We see that OP removes a significant proportion of its URLs after 5 and 7 days, with none remaining after 21 days -- potentially limiting the blacklist's effectiveness. We observe fewer URLs residing in all 3 blacklists as time-since-blacklisted increases -- suggesting that phishing URLs are often short-lived.
None of the 3 blacklists enforces a one-time-only URL policy, meaning removed URLs can be re-added -- therefore protecting users against reoffending phishing websites. Across all 3 blacklists, we detect a significant number of URLs that reappear within 1 day of removal -- perhaps suggesting premature removal or re-emerging threats. Finally, we discover 11,603 unique URLs residing in both PT and OP -- a 12% overlap. Despite its smaller average size, OP detected over 90% of these overlapping URLs before PT did.
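The uptake, dropout, and overlap measurements above all reduce to set operations over periodic blacklist snapshots. A minimal sketch of that bookkeeping, using hypothetical URLs and tiny two-day snapshots in place of our real 75-day data:

```python
# Sketch: derive uptake, dropout, and overlap from daily blacklist
# snapshots, each represented as a set of URLs. All names and data
# below are hypothetical.

def daily_churn(snapshots):
    """Yield (new_urls, removed_urls) between consecutive daily sets."""
    for prev, curr in zip(snapshots, snapshots[1:]):
        yield curr - prev, prev - curr

pt_day1 = {"http://a.example", "http://b.example"}
pt_day2 = {"http://b.example", "http://c.example"}
op_day2 = {"http://b.example", "http://d.example"}

# Churn within one blacklist: uptake and dropout between day 1 and 2
(new, removed), = daily_churn([pt_day1, pt_day2])
print(new)      # {'http://c.example'}  -> uptake
print(removed)  # {'http://a.example'}  -> dropout

# Overlap between two blacklists on the same day
overlap = pt_day2 & op_day2
print(overlap)  # {'http://b.example'}
```

Tracking how long each URL persists across consecutive snapshots, and watching for URLs that reappear in `new` shortly after featuring in `removed`, gives the lifetime and reappearance results reported above.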
One of our early research aims was to determine blacklist delays by leveraging a source of "fresh phish" to conduct a lifecycle analysis study. Our methodology involved creating a machine learning classifier to automatically detect tweets that contained phishing URLs. We would then frequently check these URLs for blacklist membership, allowing us to calculate delay times for those URLs that became blacklisted.
We designed a machine learning classifier, based on PhishAri (Aggarwal et al., 2012), to detect tweets that contain suspected phishing URLs. We built 23-feature and 6-feature models, both using the random forest algorithm for classification. Our 23-feature model predicts phishing tweets with the following performance: 28% sensitivity (true positive rate), 99% specificity (true negative rate), 0.00016% fall-out (false positive rate), and 72% miss rate (false negative rate). This gives an overall accuracy of 99%. The precision (positive predictive value) is 71% for the phishing class and 99% for the benign class. We weighted our classifier to prioritise specificity (true negative rate) due to the heavily imbalanced nature of our dataset.
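The metrics reported above all derive from the classifier's confusion matrix. A short sketch computing them from raw counts (the counts below are hypothetical illustrations chosen to roughly echo the reported rates, not our actual evaluation data):

```python
# Sketch: confusion-matrix metrics for a binary phishing classifier.
# tp/fp/tn/fn counts below are hypothetical, not our real results.

def metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "fall_out": fp / (fp + tn),      # false positive rate
        "miss_rate": fn / (fn + tp),     # false negative rate
        "precision": tp / (tp + fp),     # positive predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

m = metrics(tp=28, fp=11, tn=9961, fn=72)  # hypothetical counts
print(round(m["sensitivity"], 2))  # 0.28
print(round(m["miss_rate"], 2))    # 0.72
```

Note how a heavily imbalanced dataset (few phishing tweets among many benign ones) lets overall accuracy stay at 99% even with low sensitivity, which is why we tuned the class weighting towards specificity.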
The aim of this study is to investigate the effectiveness of web browsers' built-in phishing detection technology to assess how well-protected users are from phishing attacks.
Most web browsers (Chrome, Safari, Firefox, Opera, and Vivaldi) have the Google Safe Browsing (GSB) blacklist built in. This is designed so that users are protected from phishing and malware attacks. Our methodology for measuring web browser phishing detection effectiveness uses a data set of phishing URLs sourced independently of GSB: the PT and OP blacklists. This lets us determine web browser detection rates both for URLs that are blacklisted by GSB at time of test and for URLs that are not. This methodology allows us to investigate web browser phishing detection rates for both known and unknown phishing websites -- therefore determining web browsers' heuristic and blacklist-based phishing detection rates.
Key results show that web browsers are reasonably effective at blocking known (i.e. blacklisted) phishing attacks. However, web browsers missed up to 62% of 0-day (i.e. non-blacklisted) phishing attacks. Additionally, users are not completely protected against known phishing attacks, since blacklists take time to update -- which can create a window of opportunity for attackers.
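The evaluation above hinges on partitioning the test URLs by GSB membership at time of test and scoring each partition separately. A minimal sketch of that split, with hypothetical URLs and outcomes (a real run would record whether each browser actually displayed its warning page, and query GSB membership via the Safe Browsing API):

```python
# Sketch: compute a browser's detection rate for "known" phishing
# URLs (in GSB at time of test) versus "0-day" URLs (not in GSB).
# All URLs and outcomes below are hypothetical.

def detection_rates(results, gsb):
    """results: {url: True if the browser blocked the page}
       gsb: set of URLs blacklisted by GSB at time of test."""
    known = [blocked for url, blocked in results.items() if url in gsb]
    zero_day = [blocked for url, blocked in results.items() if url not in gsb]
    rate = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return rate(known), rate(zero_day)

results = {
    "http://phish-a.example": True,   # blocked by the browser
    "http://phish-b.example": True,   # blocked by the browser
    "http://phish-c.example": False,  # missed
    "http://phish-d.example": False,  # missed
}
gsb = {"http://phish-a.example", "http://phish-b.example"}

print(detection_rates(results, gsb))  # (1.0, 0.0)
```

The gap between the two rates is what separates blacklist-driven detection from any heuristic detection the browser performs, which is how we arrive at results such as the 62% miss rate for 0-day phishing attacks.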