Tag Archives: North Atlantic Treaty Organization

Social Media and Politics: Truthiness and Astroturfing

By Jeremy Wagstaff

(This is a column I wrote back in November. I’m repeating it here because of connections to astroturfing in the HBGary/Anonymous case.)

Just how social is social media? By which I mean: Can we trust it as a measure of what people think, what they may buy, how they may vote? Or is it as easy a place to manipulate as the real world?

The answers to these questions aren’t of academic interest only. They go right to the heart of what may be our future. More and more of our world is online. And more and more of our online world is social media: A quarter of web pages viewed in the U.S. are on Facebook. So it’s not been lost on those who care about such things that a) what we say online may add up to be a useful predictor of what we may do at the shops, the movies, at the polling booth. And b) that social media is a worthwhile place to try to manipulate what we think, and what we do at the shops, the movies—and at the ballot box.

There is plenty of evidence supporting the former. Counting the number of followers a candidate has on Facebook, for example, is apparently a pretty good indicator of whether they’ll do well at the ballot box. The Daily Beast set up something called the Oracle, which scanned 40,000 websites—including Twitter—to measure whether comments on candidates in the recent U.S. elections were positive, negative, neutral or mixed. It correctly predicted 36 of 37 Senate races, 29 of 30 Governors’ races, and nearly 98% of the House races. That’s pretty good.

Dan Zarrella, a self-styled social media scientist, counted the followers of the Twitter feeds in 30 Senate, House and Governor races and found that in 71% of the races, the candidate with the most Twitter followers was ahead in the polls. And Facebook found that candidates with more Facebook fans than their opponents won 74% of House races and 81% of Senate races. More than 12 million people used the “I Voted” button this year, more than double the number in 2008.
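
Zarrella’s heuristic is simple enough to sketch in a few lines of code. Here is a minimal illustration in Python; the race data is invented, and in practice the follower counts would come from the Twitter API and the poll leader from published polling averages.

    # Follower-count heuristic: does the candidate with more Twitter
    # followers lead the polls? Race data below is invented for illustration.
    races = [
        # (candidate_a, followers_a, candidate_b, followers_b, poll_leader)
        ("Candidate A", 25000, "Candidate B", 12000, "Candidate A"),
        ("Candidate C", 8000, "Candidate D", 30000, "Candidate D"),
        ("Candidate E", 14000, "Candidate F", 13500, "Candidate F"),
    ]

    def heuristic_accuracy(races):
        """Fraction of races where the better-followed candidate leads the polls."""
        hits = sum(1 for a, fa, b, fb, leader in races
                   if (a if fa > fb else b) == leader)
        return hits / len(races)

    print(f"Heuristic matches the polls in {heuristic_accuracy(races):.0%} of races")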

Why is this interesting? Well, social media turns out to be quite a different beast from even recent phenomena such as blogs. It really is social, in that more than previous Internet methods of communication, it reflects the views of the people using it. It is, one might say, democratic.

A study by researchers from the Technical University of Munich of the 2009 federal parliamentary elections in Germany, for example, revealed that, in contrast to the bulletin boards and blogs of the past, Twitter was reflective of the way Germans voted. Unlike bulletin boards and blogs, they wrote, “heavy users were unable to impose their political sentiment on the discussion.” The large number of participants, they found, “make the information stream as a whole more representative of the electorate.”

In other words, social media is as much a battleground for hearts and minds as the rest of the world. Even more so, perhaps, because it’s easier to reach people. Forget knocking on doors or holding rallies: Just build a Facebook page or tweet.

And, maybe, hire some political operators to build a fake movement, aka astroturfing?

Astroturfing, for those not familiar with the term, is the opposite of grassroots. If you lack the support of ordinary people, or don’t have time to get it, you can still fake it. Just make it look like you’ve got grassroots support. Since the term was coined in the mid-1980s it’s become a popular activity among marketers, political operators and governments (think of the Chinese 50-cent blogging army). Astroturfing, in short, allows a politician to seem a lot more popular than he really is by paying folk to say how great he is.

Whether social media is ripe for astroturfing isn’t clear. On the one hand, we know that the Internet is full of fakery and flummery: Just because your inbox is no longer full of spam doesn’t mean the Internet isn’t full of it—87% of all email is spam, according to the latest figures from MessageLabs. You don’t see it because the filters are getting better at keeping it away from you. Twitter, by contrast, is much less spammy: the latest figures from Twitter suggest that after some tweaks earlier this year the percentage of unwanted messages on the service is about 1%.

So Twitter isn’t spammy, and it broadly reflects the electorate. But can it be gamed?

We already know that Twitter can spread an idea, or meme, rapidly—only four hops are needed before more or less everyone on Twitter sees it. In late 2009 Google unveiled a new product: real-time search. This meant that, atop the usual results of a search, Google would throw in the latest matches from the real-time web—in other words, Twitter and its ilk. So getting your tweets up there would be valuable if, say, you were a political operator and you wanted people to hear good things about your candidate, or bad things about your rival. But were people doing this? Two researchers from Wellesley College in Massachusetts wondered.
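
To see why a handful of hops can cover the network, here is a naive back-of-envelope sketch; the average follower count is my assumption, and since real audiences overlap heavily this is an upper bound, not a measurement.

    # Naive reach estimate: each hop multiplies the audience by the
    # average follower count. Ignores audience overlap, so upper bound only.
    AVG_FOLLOWERS = 100  # assumed average audience per account
    HOPS = 4
    print(f"Potential reach after {HOPS} hops: {AVG_FOLLOWERS ** HOPS:,}")
    # Potential reach after 4 hops: 100,000,000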

Panagiotis Takis Metaxas and Eni Mustafaraj studied their local Senate race and found that they were. They looked at 185,000 Twitter messages mentioning the two competing candidates and found that there was plenty of astroturfing going on—political supporters creating fake accounts, repeating each other’s messages, and sending them to likely sympathizers, in the hope of their messages hitting the mainstream.

The researchers found one group, apparently linked to an Iowa Republican group, was sending out one tweet a second linking to websites “exposing” their rival’s missteps and misstatements. Overall, the message they sent reached more than 60,000 users. The researchers concluded that “the fact that a few minutes of work, using automated scripts and exploiting the open architecture of social networks such as twitter, makes possible reaching a large audience for free…raises concerns about the deliberate exploitation of the medium.”
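
The signature the researchers describe—many accounts pushing the same link in a tight burst—is straightforward to look for. A hypothetical sketch, with invented tweet records and field names:

    from collections import defaultdict
    from datetime import datetime, timedelta

    # Invented sample data: three accounts pushing one link a second apart,
    # plus one unrelated organic tweet.
    tweets = [
        {"user": "acct_001", "url": "http://example.org/expose", "time": datetime(2010, 1, 15, 9, 0, 0)},
        {"user": "acct_002", "url": "http://example.org/expose", "time": datetime(2010, 1, 15, 9, 0, 1)},
        {"user": "acct_003", "url": "http://example.org/expose", "time": datetime(2010, 1, 15, 9, 0, 2)},
        {"user": "organic1", "url": "http://example.org/news", "time": datetime(2010, 1, 15, 11, 30, 0)},
    ]

    def flag_bursts(tweets, min_tweets=3, window=timedelta(seconds=10)):
        """Flag URLs pushed by several accounts within a narrow time window."""
        by_url = defaultdict(list)
        for t in tweets:
            by_url[t["url"]].append(t)
        suspicious = []
        for url, group in by_url.items():
            group.sort(key=lambda t: t["time"])
            span = group[-1]["time"] - group[0]["time"]
            users = {t["user"] for t in group}
            if len(group) >= min_tweets and len(users) > 1 and span <= window:
                suspicious.append(url)
        return suspicious

    print(flag_bursts(tweets))  # ['http://example.org/expose']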

The point here is not merely that you’re propagating a point of view. That’s just spam. But by setting up fake Twitter accounts, tweeting, and then repeating these messages, you’re creating the illusion that these views are widespread. We may ignore the first Twitter message we see promoting these views and linking to a website, but will we ignore the second or the third?

This discovery of Twitter astroturfing in one race has prompted researchers at Indiana University to set up a tool they call Truthy—after comedian Stephen Colbert’s term for something that someone knows intuitively, from the gut, irrespective of evidence, logic or the facts. Their tool has exposed other similar attacks which, while not explosive in terms of growth, are, they wrote in an accompanying paper, “nevertheless clear examples of coordinated attempts to deceive Twitter users.” And, they point out, the danger with these Twitter messages is that unless they’re caught early, “once one of these attempts is successful at gaining the attention of the community, it will quickly become indistinguishable from an organic meme.”
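
Truthy works by analyzing the diffusion network behind each meme. A crude proxy for that idea, sketched here with invented data, is how concentrated a meme’s traffic is in its busiest accounts: an organic meme has many independent injectors, while an astroturfed one is dominated by a coordinated handful.

    from collections import Counter

    def top_account_share(accounts, k=3):
        """Fraction of a meme's tweets produced by its k busiest accounts."""
        counts = Counter(accounts)
        return sum(n for _, n in counts.most_common(k)) / len(accounts)

    # Invented examples: a few coordinated accounts vs. many one-off users.
    astroturfed = ["bot_a"] * 40 + ["bot_b"] * 35 + ["bot_c"] * 20 + ["user1", "user2"]
    organic = [f"user{i}" for i in range(100)]

    print(f"astroturf-like meme: {top_account_share(astroturfed):.0%} from top 3 accounts")
    print(f"organic-like meme: {top_account_share(organic):.0%} from top 3 accounts")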

This is all interesting, for several reasons. First off, it’s only in the past few months that we’ve woken up to what political operators seem to be doing on Twitter. Secondly, while none of these cases achieves viral levels, the relative ease with which these campaigns can be launched suggests that a lot more people will try them out. Thirdly, what does this tell us about the future of political manipulation in social media?

I don’t know, but it’s naïve to think that this is just an American thing. Or a ‘what do you expect in a thriving democracy?’ thing. Less democratically minded organizations and governments are becoming increasingly sophisticated about the way they use the Internet to control and influence public opinion. Evgeny Morozov points to Lebanon’s Hezbollah, “whose suave manipulation of cyberspace was on display during the 2006 war with Israel”; my journalist friends in Afghanistan say the Taliban are more sophisticated about using the Internet than the Karzai government or NATO.

The good news is that researchers are pushing Twitter to improve its spam-catching tools to stop this kind of thing from getting out of hand. But I guess the bigger lesson is this: While social media is an unprecedented window on, and reflection of, the populace, it is also an unprecedented opportunity for shysters, snake-oil salesmen and political operators to manipulate what we think we know.

It may be a great channel for the truth, but truthiness may be only one step behind.

Data, WikiLeaks and War

I’m not going to get into the rights and wrongs of the WikiLeaks thing. Nor am I going to look at the bigger implications for the balance of power between governed and governing, and between the U.S. and its allies and foes. Others have written much better than I can on these topics.

I want to look at what the cables tell us about the sorting, sifting and accessing of this information. In short, what does this tell us about how the world’s most powerful nation organized some of its most prized data?

To start with, I want to revisit a conversation I had a few weeks back, sitting in the garden of a Kabul pub called the Gandermack, when it struck me: the biggest problem facing NATO in winning the war in Afghanistan is data.

I was talking to a buff security guy—very buff, in fact, as my female companions kept remarking—who was what might once have been a rare breed but is now in big demand in Afghanistan. He was a former marine (I think), but he was also a computer guy with an anthropology or sociology degree under his black belt somewhere. This guy knew his stuff.

And he was telling the NATO forces where they were going wrong: data management.

The problem, he explained, is not that there isn’t enough of it. It’s that there’s too much of it, and it’s not being shared in a useful way. Connections are not being made. Soldiers are drowning in intelligence.

All the allied forces in Afghanistan have their own data systems. But, I was told, there’s no system to make sense of it. Nor is there one to share it. So data collected by a garrison from one country in one part of the country is not accessible by any of the other 48 nations.

On the surface it seems this problem was fixed. In the wake of 9/11, U.S. departments were told to stop being so secretive. Which is how we got WikiLeaks: one guy apparently able to access millions of classified documents from pretty much every corner of the planet. If he could do it, then so could thousands of other people. And, one would have to assume, so could more than a few people who weren’t supposed to have access. To give you an idea of the size of the trove, WikiLeaks has released only about 1,000 of some 250,000 cables so far, meaning that at this rate it’s going to take nearly seven years to get them all out. Cable fatigue, anyone?
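
The arithmetic behind that estimate is easy to reproduce. The roughly 250,000-cable total was widely reported; the release rate of about 100 cables a day is my assumption, based on the 1,000 or so out at the time of writing.

    # Back-of-envelope check on the "nearly seven years" figure.
    TOTAL_CABLES = 251000      # widely reported size of the trove
    RELEASED_SO_FAR = 1000
    CABLES_PER_DAY = 100       # assumed average release rate

    days_left = (TOTAL_CABLES - RELEASED_SO_FAR) / CABLES_PER_DAY
    print(f"{days_left / 365:.1f} years to release the rest")  # 6.8 years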

So, it would seem that the solution to the problem of not having enough pooled information is to just let anyone have it. But that, it turns out, isn’t enough. That’s because what we see from the WikiLeaks material is how old it looks.

I spent much of the early 1980s trawling through this kind of thing as a history student. Of course, those were all declassified documents going back to the 1950s, but the language was remarkably similar, as were the structure, the tone, the topics, the look and feel. A diplomatic cable in 2010 looks a lot like a cable from 50 years ago. In the meantime communication has gone from the telegraph to the fax to email to blogs to the iPhone to Twitter to Facebook.

This, to me, is the problem. It’s not that we’ve suddenly glimpsed inside another world: We would have seen a lot of this stuff at some point anyway, though it’s useful to see it earlier. Actually we can take some succour from the fact that diplomats seem to be doing a pretty good job of reporting on the countries they’re posted to. Journalists shouldn’t be surprised; we’ve relied on diplomats for a while. (And they might rightly feel somewhat aggrieved we now do this to them.)

No, the problem that WikiLeaks unearths is that the most powerful nation on earth doesn’t seem to have any better way of working with all this information than anyone else. Each cable has some header material—who it’s intended for, who it’s by, and when it was written. Then there’s a line called TAGS, which, in true U.S. bureaucratic style, doesn’t actually mean tags but “Traffic Analysis by Geography and Subject”—a State Department system to organize and manage the cables. Many are two-letter country or regional tags—US, AF, PK, etc.—while others are four-letter subject tags, from AADP for Automated Data Processing to PREL for external political relations, or SMIG for immigration-related terms.

Of course there’s nothing wrong with this—the tag list is updated regularly (the most recent update seems to have been in January 2008). You can filter a search by, say, a combination of countries, a subject tag and then what’s called a program tag, which always begins with K, such as KPAO for Public Affairs Office.
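
In code, that kind of filtering is little more than set intersection. A sketch following the tagging scheme described above (two-letter geographic tags, four-letter subject tags, K-prefixed program tags); the cable records themselves are invented:

    # Invented cable records carrying TAGS-style metadata.
    cables = [
        {"id": "10KABUL123", "tags": {"AF", "PREL", "KPAO"}},
        {"id": "09ISLAMABAD77", "tags": {"PK", "PREL"}},
        {"id": "10KABUL200", "tags": {"AF", "SMIG"}},
    ]

    def filter_cables(cables, required_tags):
        """Return cables carrying every tag in required_tags."""
        required = set(required_tags)
        return [c for c in cables if required <= c["tags"]]

    # Country tag + subject tag + program tag, as described above.
    for cable in filter_cables(cables, {"AF", "PREL", "KPAO"}):
        print(cable["id"])  # 10KABUL123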

This is all very well, but it’s very dark ages. The trouble is, as my buff friend in the Kabul garden points out, there’s not much out there that’s better. A CIA or State Department analyst may use a computer to sift through the tags and other metadata, but that seems to be the only real difference between him and his Mum or Dad 50 years before.

My buff friend compared the political officer in today’s ISAF with a political officer (sometimes called an agent) back in the days of the British Raj. Back then the swashbuckling fella would ride a horse, sleep on the ground and know the Afghan hinterlands like the back of his hand, often riding alone, sipping tea with local chieftains to collect intelligence and using it to effect change (in this case meaning extending the already bulging British sphere of influence). He would know the ins and outs of local tribal rivalries, who hated whom, etc. All of it stored in his head or in little notebooks.

His modern equivalent may actually have the same information, but it’ll be gleaned from the occasional photo opportunity, a squillion intelligence reports, all suitably tagged, and perhaps footage from a couple of drones. If the chieftain he’s interested in co-opting straddles a regional command, chances are he won’t be able to access anyone else’s information on him—assuming they have any.

In short, the problem in the military and diplomatic world is the same one we’re facing in the open world. We have a lot more information than we can use—or keep track of—and it’s not necessarily making us any smarter. Computers haven’t helped us understand stuff better—they’ve just helped us collect, share, and lose more of it.

I must confess I’ve not made much progress on this myself. My main contribution has been persuading a researcher friend to use a program called PersonalBrain, which helps you join the dots between people, things, organisations—whatever you’re trying to figure out. It’s all manual, though, which puts people off: What, you mean I have to make the connections myself? Well, yes. Computers aren’t magic.
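
Under the hood, that kind of manual linking is just a labelled graph. A minimal sketch of the same idea, with invented entities and hand-made links, using nothing but Python’s standard library:

    from collections import defaultdict

    links = defaultdict(set)

    def connect(a, b):
        """Record a manual, undirected connection between two entities."""
        links[a].add(b)
        links[b].add(a)

    # The connections still have to be made by hand -- computers aren't magic.
    connect("Chieftain X", "District Y")
    connect("Chieftain X", "NGO Z")
    connect("District Y", "Regional Command East")

    # "Joining the dots": everything one hop away from an entity.
    print(sorted(links["Chieftain X"]))  # ['District Y', 'NGO Z']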

Yet. It’s clear to me that 10 years down the track, I hope, we’ll finally accept that writing in prose, and then adding a hierarchy of labels to the document, is no longer the way to go. Instead, we’ll be writing into live forms that make connections as we write, annotate on the fly, draw spindly threads to other parts of our text, and make everything come to life. I will be able to pull visuals, audio, other people, old records, chronologies and maps into the document, and work with the data in three dimensions.

If this sounds familiar, it’s probably because it sounds like science fiction, something like Minority Report. But it’s not; it’s a glimpse inside the mind of our imperial political agent; how he would make those connections because they were all in his head—neurons firing transmitters, axons alive, binding synapses.

If I were the U.S. government, I would take Cablegate as a wake-up call. Not because of the effrontery of this humiliation, but as a chance to rethink how its data is being gathered and made use of. Cablegate tells us that the world of the cable is over.


Cyberwar, Or Just a Taste?

Some interesting detail on the Estonian cyberwar. This ain’t just any old attack. According to Jose Nazario, who works at Arbor Networks, the attacks peaked a week ago but aren’t over:

As for how long the attacks have lasted, quite a number of them last under an hour. However, when you think about how many attacks have occurred for some of the targets, this translates into a very long-lived attack. The longest attacks themselves were over 10 and a half hours long sustained, dealing a truly crushing blow to the endpoints.

There’s some older stuff here, from F-Secure, which shows that it’s not (just) a government initiative. And Dr Mils Hills, who works at the Civil Contingencies Secretariat of the UK’s Cabinet Office (a department of government responsible for supporting the prime minister and cabinet), feels that cyberwar may be too strong a term for something that he prefers to label ‘cyber anti-social behaviour’.

Indeed, what surprises him is that such a technologically advanced state — which uses electronic voting, ID cards and laptop-centric cabinet meetings — could so easily be hobbled by such a primitive form of attack, and what implications that holds:

What IS amazing is that a country so advanced in e-government and on-line commercial services has been so easily disrupted. What more sophisticated and painful things might also have already been done? What else does this indicate about e-security across (i) the accession countries to the EU; (ii) NATO and, of course, the EU itself?

It’s probably true that this is just a little blip on the screen of what is possible, and of what governments are capable of doing.

(Definition of Cyberwar from Wikipedia here.)


Russia Declares Cyberwar?

The Guardian reports on what some are suggesting may be the first outbreak of official cyberwar between one country and another, after Russian hackers, official or not, flooded Estonian websites with distributed denial-of-service (DDoS) attacks:

Without naming Russia, the Nato official said: “I won’t point fingers. But these were not things done by a few individuals.


“This clearly bore the hallmarks of something concerted. The Estonians are not alone with this problem. It really is a serious issue for the alliance as a whole.”