
Social Media and Politics: Truthiness and Astroturfing

By Jeremy Wagstaff

(This is a column I wrote back in November. I’m repeating it here because of connections to astroturfing in the HBGary/Anonymous case.)

Just how social is social media? By which I mean: Can we trust it as a measure of what people think, what they may buy, how they may vote? Or is it as easy a place to manipulate as the real world?

The answers to these questions aren’t of academic interest only. They go right to the heart of what may be our future. More and more of our world is online, and more and more of our online world is social media: a quarter of web pages viewed in the U.S. are on Facebook. So it’s not been lost on those who care about such things that a) what we say online may add up to a useful predictor of what we’ll do at the shops, the movies and the polling booth; and b) that social media is a worthwhile place to try to manipulate what we think, and what we do at the shops, the movies—and at the ballot box.

There is plenty of evidence supporting the former. Counting the number of followers a candidate has on Facebook, for example, is apparently a pretty good indicator of whether they’ll do well at the ballot box. The Daily Beast set up something called the Oracle, which scanned 40,000 websites—including Twitter—to measure whether comments on candidates in the recent U.S. elections were positive, negative, neutral or mixed. It correctly predicted 36 out of 37 Senate races, 29 out of 30 governors’ races and nearly 98% of the House races. That’s pretty good.

Dan Zarrella, a self-styled social media scientist, counted the followers of candidates’ Twitter feeds in 30 Senate, House and governors’ races and found that in 71% of them, the candidate with the most Twitter followers was ahead in the polls. Facebook, meanwhile, found that candidates with more Facebook fans than their opponents won 74% of House races and 81% of Senate races. More than 12 million people used the “I Voted” button this year, more than double the 2008 figure.

Why is this interesting? Well, social media turns out to be quite a different beast from even recent phenomena such as blogs. Social media really is social, in that, more than previous Internet methods of communication, it reflects the views of the people using it. It is, one might say, democratic.

A study by researchers at the Technical University of Munich of Germany’s 2009 federal parliamentary elections, for example, found that, in contrast to the bulletin boards and blogs of the past, Twitter reflected the way Germans voted. Unlike on bulletin boards and blogs, they wrote, “heavy users were unable to impose their political sentiment on the discussion.” The large number of participants, they found, “make the information stream as a whole more representative of the electorate.”

In other words, social media is as much a battleground for hearts and minds as the rest of the world. Even more so, perhaps, because it’s easier to reach people. Forget knocking on doors or holding rallies: Just build a Facebook page or tweet.

And, maybe, hire some political operators to build a fake movement, aka astroturfing?

Astroturfing, for those not familiar with the term, is the opposite of grassroots. If you lack the support of ordinary people, or don’t have time to win it, you can still fake it: just make it look as if you have grassroots support. Since the term was coined in the mid-1980s it has become a popular activity among marketers, political operators and governments (think of China’s 50-cent blogging army). Astroturfing, in short, allows a politician to seem a lot more popular than he really is by paying folk to say how great he is.

Whether social media is ripe for astroturfing isn’t clear. On one hand, we know the Internet is full of fakery and flummery: just because your inbox is no longer full of spam doesn’t mean the Internet isn’t—spam accounts for 87% of all email, according to the latest figures from MessageLabs. You don’t see it because the filters are getting better at keeping it away from you. Twitter, by contrast, is much less spammy: the latest figures from Twitter suggest that, after some tweaks earlier this year, unwanted messages make up about 1% of traffic on the service.

So Twitter isn’t spammy, and it broadly reflects the electorate. But can it be gamed?

We already know that Twitter can spread an idea, or meme, rapidly—only four hops are needed before more or less everyone on Twitter sees it. In late 2009 Google unveiled a new product: real-time search. This meant that, atop the usual search results, Google would throw in the latest matches from the real-time web—in other words, Twitter and its ilk. So getting your tweets up there would be valuable if, say, you were a political operator and you wanted people to hear good things about your candidate, or bad things about your rival. But were people doing this? Two researchers from Wellesley College in Massachusetts wondered.
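
As a back-of-the-envelope illustration of why four hops can be enough (my sketch, not the researchers’ model): if each message reaches, say, 100 users and each of them passes it on, potential reach grows exponentially with each hop.

```python
# Idealized cascade: every message reaches `fanout` users, who all pass
# it on. Real cascades overlap and decay, but the exponential growth is
# the point: four hops is already a hundred million people.
fanout = 100  # assumed average audience per hop (illustrative)
for hops in range(1, 5):
    print(f"after {hops} hop(s): up to {fanout ** hops:,} users")
```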

Panagiotis Takis Metaxas and Eni Mustafaraj studied the local Senate race and found that they were. They looked at 185,000 Twitter messages mentioning the two competing candidates and found plenty of astroturfing going on: political supporters were creating fake accounts, repeating each other’s messages and sending them to likely sympathizers, in the hope of their messages hitting the mainstream.

The researchers found that one group, apparently linked to an Iowa Republican organization, was sending out one tweet a second linking to websites “exposing” their rival’s missteps and misstatements. Overall, the messages they sent reached more than 60,000 users. The researchers concluded that “the fact that a few minutes of work, using automated scripts and exploiting the open architecture of social networks such as twitter, makes possible reaching a large audience for free…raises concerns about the deliberate exploitation of the medium.”
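
To make the mechanics concrete, here is a minimal sketch of the simplest detection signal the study describes: many distinct accounts pushing near-identical messages. (This is my own illustration, not the researchers’ actual pipeline; the threshold and the normalization are assumptions.)

```python
import re
from collections import defaultdict

def normalize(text: str) -> str:
    """Lower-case, drop URLs and collapse whitespace so near-duplicates match."""
    text = re.sub(r"https?://\S+", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def suspicious_clusters(tweets, min_accounts=10):
    """tweets: iterable of (account, text) pairs.
    Returns texts posted by at least `min_accounts` distinct accounts."""
    accounts_by_text = defaultdict(set)
    for account, text in tweets:
        accounts_by_text[normalize(text)].add(account)
    return {text: accts for text, accts in accounts_by_text.items()
            if len(accts) >= min_accounts}
```

Anything this flags in a day’s worth of tweets about a candidate is worth a second look: account creation dates, follower counts and who follows whom.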

The point here is not merely that you’re propagating a point of view. That’s just spam. But by setting up fake Twitter accounts, tweeting and then repeating these messages, you’re creating the illusion that these views are widespread. We may ignore the first Twitter message we see espousing these views and linking to a website, but will we ignore the second or the third?

This discovery of Twitter astroturfing in one race has prompted researchers at Indiana University to set up a tool they call Truthy—after comedian Stephen Colbert’s term for something one knows intuitively, from the gut, irrespective of evidence, logic or the facts. Their tool has exposed other similar attacks which, while not explosive in terms of growth, are, they wrote in an accompanying paper, “nevertheless clear examples of coordinated attempts to deceive Twitter users.” And, they point out, the danger with these Twitter messages is that unless they’re caught early, “once one of these attempts is successful at gaining the attention of the community, it will quickly become indistinguishable from an organic meme.”
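
Again purely as an illustration (Truthy’s actual analysis examines diffusion networks, among other signals), one cheap proxy for “organic versus coordinated” is how concentrated a meme’s traffic is among its busiest accounts:

```python
from collections import Counter

def top_share(accounts, k=5):
    """Fraction of a meme's tweets produced by its k busiest accounts.
    Organic memes tend to have many independent injection points;
    coordinated ones are dominated by a handful of accounts."""
    counts = Counter(accounts)
    total = sum(counts.values())
    top = sum(n for _, n in counts.most_common(k))
    return top / total if total else 0.0
```

A meme where five accounts produce 90% of the traffic looks very different from one where five accounts produce 5% of it, though, as the Indiana team warns, the signal fades once real users start passing the message on.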

This is all interesting for several reasons. First, it’s only in the past few months that we’ve woken up to what political operators seem to be doing on Twitter. Second, while none of these cases achieves viral levels, the relative ease with which such campaigns can be launched suggests that a lot more people will try them out. Third, what does this tell us about the future of political manipulation in social media?

I don’t know, but it’s naïve to think that this is just an American thing, or a ‘what do you expect in a thriving democracy?’ thing. Less democratically minded organizations and governments are becoming increasingly sophisticated about the way they use the Internet to control and influence public opinion. Evgeny Morozov points to Lebanon’s Hezbollah, “whose suave manipulation of cyberspace was on display during the 2006 war with Israel”; my journalist friends in Afghanistan say the Taliban are more sophisticated about using the Internet than the Karzai government or NATO.

The good news is that researchers are pushing Twitter to improve its spam-catching tools to stop this kind of thing from getting out of hand. But I guess the bigger lesson is this: while social media is an unprecedented window on, and reflection of, the populace, it is also an unprecedented opportunity for shysters, snake-oil salesmen and political operators to manipulate what we think we know.

It may be a great channel for the truth, but truthiness may also be one step behind.


MyDoom Anniversary: Another Big Attack In The Offing?

Today’s the first anniversary of the MyDoom.A worm. According to an email I received earlier today from MessageLabs, ‘the world’s leading provider of email security services to business’, it was a day that “changed the virus landscape forever”:

27 January 2005 – At 13:26 on 26 January 2004, MessageLabs intercepted its first copy of W32/MyDoom.A. Within the first twenty-four hours, the company had stopped over 1.2 million copies. MyDoom.A, which achieved a peak infection rate of 1 in 12 emails, has proved to represent a landmark in the history of computer viruses, and the legacy lives on…

I’m not sure whether this is just a coincidence, but folks at Network Box tell me of a fresh attack by Bagle. “Depending on the next few hours, this could be a large attack,” says Network Box’s Quentin Heron:

Network Box Security Response is tracking several new variants of the Bagle Internet worm… We are seeing thousands of blocks on these variants, from dozens of sites in Hong Kong. We are checking worldwide infection rates at the moment, but this looks extensive.

For those of you who follow these things, the worm matches Kaspersky Labs’ signatures for Email-Worm.Win32.Bagle.ax and Email-Worm.Win32.Bagle.ay.

I’ll keep you posted.

New Variation Of Bagle Spreading Fast

More virus trouble afoot. This time it’s a variation of Bagle.

MessageLabs reports that it has intercepted more than 10,000 copies in an hour as of this morning. Most seem to be from the UK and the U.S., although the first copy it received was from Poland.

It appears to be a mass-mailing worm, installing a backdoor Trojan on infected machines much like its predecessor. It looks like this:

Subject: ID <random>… thanks
Text:  Unknown
Attachment: <Random>.exe
Size: 11264 bytes
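
For the curious, a hand-rolled filter matching that reported signature might look like the sketch below. (This is my illustration based only on the fields above; the regex and the exact size check are assumptions, and real scanners match on content signatures, not headers.)

```python
import re
from email import message_from_bytes

# Hypothetical rule for the reported Bagle.B pattern: subject
# "ID <random>... thanks" plus a randomly named .exe of 11264 bytes.
SUBJECT_RE = re.compile(r"^ID\s+\S+.*thanks", re.IGNORECASE)

def matches_bagle_b_signature(raw: bytes) -> bool:
    msg = message_from_bytes(raw)
    if not SUBJECT_RE.match(msg.get("Subject", "")):
        return False
    for part in msg.walk():
        name = (part.get_filename() or "").lower()
        payload = part.get_payload(decode=True) or b""
        if name.endswith(".exe") and len(payload) == 11264:
            return True
    return False
```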

eWeek says it also includes a component that notifies the author each time a new machine is infected. The worm mails itself to all the addresses found on the user’s hard drive, with the exception, for some reason, of addresses in the Hotmail, MSN, Microsoft and AVP domains.

Bagle.B also opens port 8866 and begins listening for remote connections, according to an analysis done by Network Associates Inc.’s McAfee AVERT team. The virus also sends an HTTP notification, presumably to the author, notifying him that the machine is infected.
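
Based on that report, a quick way to check a machine for the listening backdoor is a plain TCP connect to that port. (A sketch under the stated assumption that Bagle.B listens on 8866; an open port is a reason to run a proper scanner, not proof of infection.)

```python
import socket

def port_open(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something accepts TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

if port_open(8866):  # the port Bagle.B reportedly opens
    print("Something is listening on 8866: scan this machine.")
```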

The Next Step: Anti-Phishing Services

MessageLabs, those hyperactive purveyors of Internet security, have come up with what I believe is the first anti-phishing service for banks and other targeted companies. (Phishing is the scam whereby bogus emails entice you to give up your online banking password and other sensitive information.) The service had been available to about 15 banks and is now available to everyone.


The service involves “real-time scanning, expert analysis and authentication, incident response and early notification of suspicious email activity”. The company uses its Skeptic™ Radar technology (I’m not making this up) to scan millions of email messages for threats and anomalies. When a scam is identified, analysed and authenticated, the company notifies the targeted firm and provides details of the attack. Companies are then able to work with law enforcement agencies to quickly and effectively shut down scammers. (It says here.)


MessageLabs says it has been able to alert “in-house IT staff to the problem before they knew of its existence”. In pilot cases it was able to close down fraudulent websites within a couple of hours.


MessageLabs reckon about “20% of all recipients that receive phishing emails have been duped into providing user names, passwords and social security numbers”. That’s a very high figure; I’d heard 5%. I’ll try to find out where MessageLabs get it from.

The Bagle Worm

I’m getting quite a few warnings about a new worm called Bagle, so I thought I’d pass them along. MessageLabs, an email security company, says it’s currently spreading at an alarming rate. The first copy of the worm was intercepted from Germany, and at the moment the majority of copies are being captured as they are sent from Australia. It seems to have several bits to it:

The worm arrives as an attachment to an email with the subject line ‘Hi’ and has a random filename, with a .exe extension. W32/Bagle-mm searches the infected machine for email addresses and then uses its own SMTP engine to send itself to the addresses found. The worm makes a poor attempt to lure users into double-clicking on the attachment by using social engineering techniques.

Further analysis suggests that the worm includes a backdoor component that listens for connections from a malicious user and can send notification of an infected system.

It also appears that the worm may attempt to download a Trojan proxy component, known as Backdoor-CBJ. This Trojan is able to act as a proxy server and can download other code which could be used for key-logging and password stealing.

Here’s more on it from CNet.

Another Spamming Record

You’re probably getting bored of spam statistics by now, and I wouldn’t blame you. But here’s another milestone, courtesy of MessageLabs, who monitor this kind of thing: December set a new record, they say, for the ratio of spam to ordinary email. In that month, MessageLabs scanned some 463 million emails and found that 1 in every 1.6 of them, or 62.7%, was spam. They don’t give a comparative figure, but their PR says that’s a record.

Of course, it may just have been the holiday season, although spam this month shows no sign of easing up, whether for that reason or because of new laws. MessageLabs also do a breakdown by industry, to show which are most vulnerable to spam (useful, I guess, if you’re in one of those industries and need to measure how big a problem it is for your staff). It turns out the public sector has the smallest problem — only 1 in every 3.65 emails your average civil servant gets is spam — whereas if you’re a healthcare worker, chances are 1 in every 1.21 emails you get is junk. Go figure.

Here’s another weird statistic. MessageLabs also monitor viruses, and their figures seem to show that, depending on what country and sector you’re in, your chances of getting a virus-laden email vary wildly. In U.S. real estate? Relax, only 1 in 439 emails is going to be a virus. In the UK leisure and recreation industry? The likelihood rises to 1 in under 50. Why would that be?
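
For anyone puzzling over the “1 in N” convention MessageLabs uses: the rate is simply 1/N. Assuming the reported ratios are exact, the figures above convert like this:

```python
# Convert MessageLabs' "1 in N" ratios to percentages (rate = 1/N).
figures = {
    "all mail, December (spam)": 1.6,   # ~62.5%
    "public sector (spam)": 3.65,       # ~27.4%
    "healthcare (spam)": 1.21,          # ~82.6%
    "U.S. real estate (virus)": 439,    # ~0.2%
    "UK leisure (virus)": 50,           # ~2.0%
}
for label, n in figures.items():
    print(f"{label}: 1 in {n} = {100 / n:.1f}%")
```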

Happy Birthday, SoBig

A press release from email security folks MessageLabs points out that tomorrow is the first anniversary of the SoBig.A worm’s debut. SoBig.A (the A means it was the first of a stream of worms based on the original SoBig) wasn’t just any worm, MessageLabs point out: it was the first virus to use convergence techniques to create maximum havoc.

Basically, this means SoBig.A didn’t just do one thing. It incorporated both spamming and virus-writing techniques—infecting hundreds of thousands of computers worldwide and installing open proxies on compromised machines, which were then used to disseminate spam, unknown to the users. To date, MessageLabs has intercepted 727,102 copies of the worm in 183 countries, and it continues to spread.

SoBig was so successful it’s now into version F, the most prolific virus to date. The SoBig family, MessageLabs say, has also served as the model for other viruses using convergence techniques, such as the Fizzer worm. MessageLabs predicts that this style of virus writing will be extensive during 2004.

Needless to say, this all helps blur the boundary between spammers, scammers, virus writers (and, probably, the Mob). Says David Banes, MessageLabs’ Technical Director Asia Pacific: “The success of SoBig has served as an inspiration to cyber criminals, and demonstrates what can be achieved when they work together.”

2003, Year of the Spiral of Evil? Or Just The Start?

MessageLabs, who track this sort of thing, say that spam and viruses hit all-time highs in 2003. Not surprising, but the figures are pretty shocking, revealing the symbiotic relationship between spam and viruses — what I called in a recent WSJ/FEER column the Spiral of Evil (no, it doesn’t seem to have caught on). Here are the figures:

— Two-thirds of all spam now comes from open proxies created by viruses
— Ratio of spam to email is 1 in 2.5 – up 77 per cent in 12 months
— Ratio of viruses to email is now 1 in 33 – up 84 per cent

Basically, this means that virus writers are hijacking innocent computers and turning them into open proxies — a sort of free sorting office for spam, churning it all out and in the process hiding the original sender from anti-spammers.

Among the highlights of 2003: Sobig.F broke the world record in August to become the fastest-spreading virus ever, with one million copies stopped in a single day by MessageLabs. MessageLabs also reckon that 66% of spam was coming from computers infected by viruses such as Sobig.F. At its peak, 1 in every 17 emails stopped by MessageLabs contained a copy of Sobig.F. By December 1, more than 32 million emails containing the virus had been stopped, putting Sobig.F at the head of the Top 10 viruses list for 2003.

Update: Sobig Is Back

Just when you thought it was safe to disable the antivirus software: MessageLabs reports a fast-spreading mass-mailing virus it’s calling W32/Sobig.F-mm. The initial copies all originated from the United States.

Sobig.F appears to be polymorphic in nature, and the email’s ‘From:’ address is spoofed, so it may not indicate the true identity of the sender. It may carry the subject line ‘Re: Details’ and the text ‘Please see the attached file for details.’

Attachment names may include: your_document.pif, details.pif, your_details.pif, thank_you.pif, movie0045.pif, document_Fall.pif, application.pif, document_9446.pif. Watch out. It’s moving rapidly, a bit like babies across the floor.