Tag Archives: Lebanon

Real Phone Hacking

Interesting glimpse into the real world of phone hacking–not the amateurish stuff we’ve been absorbed by in the UK–by Sharmine Narwani: In Lebanon, The Plot Thickens « Mideast Shuffle.

First off, there’s the indictment just released by the Special Tribunal for Lebanon which, in the words of Narwani,

appears to be built on a simple premise: the “co-location” of cellular phones — traceable to the accused four — that coincide heavily with Hariri’s whereabouts and crucial parts of the murder plot in the six weeks prior to his death.

Indeed, the case relies heavily on Call Data Record (CDR) analysis. Which sounds kind of sophisticated. Or is it? Narwani contends that the data could have been manufactured. As she says,

there isn’t a literate soul in Lebanon who does not know that the country’s telecommunications networks are highly infiltrated — whether by competing domestic political operatives or by foreign entities.
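The co-location premise itself is conceptually simple. A minimal sketch in Python, using made-up records and field names (real CDRs carry far more detail, such as IMEI, cell IDs and call duration, and the dates below are invented):

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical, simplified call data records: (phone_id, cell_tower, timestamp).
cdrs = [
    ("target",  "tower_A", "2005-01-14 09:00"),
    ("suspect", "tower_A", "2005-01-14 09:05"),
    ("target",  "tower_B", "2005-01-14 11:30"),
    ("suspect", "tower_B", "2005-01-14 11:32"),
    ("suspect", "tower_C", "2005-01-15 08:00"),
]

def co_locations(records, phone_a, phone_b, window_minutes=15):
    """Count how often two phones appear on the same tower within a time window."""
    fmt = "%Y-%m-%d %H:%M"
    by_tower = defaultdict(list)
    for phone, tower, ts in records:
        by_tower[tower].append((phone, datetime.strptime(ts, fmt)))
    hits = 0
    for tower, entries in by_tower.items():
        times_a = [t for p, t in entries if p == phone_a]
        times_b = [t for p, t in entries if p == phone_b]
        # A hit: phone_a seen on this tower within the window of a phone_b sighting.
        for ta in times_a:
            if any(abs((ta - tb).total_seconds()) <= window_minutes * 60
                   for tb in times_b):
                hits += 1
    return hits

print(co_locations(cdrs, "target", "suspect"))  # 2 co-locations in this toy data
```

The point of the sketch is also the weakness Narwani identifies: a script like this can only count what the records say, and it has no way of telling genuine CDRs from forged ones.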

There is plenty of evidence to support this. The ITU recently issued two resolutions [PDF] basically calling on Israel to stop conducting “piracy, interference and disruption, and sedition”.

And Lebanon has arrested at least two men accused of helping Israel infiltrate the country’s cellular networks. What’s interesting about this from a data-war point of view is that one of those arrested has, according to Narwani, confessed to lobbying the cellular operator he worked for against installing more secure hardware made by Huawei, which would presumably have made eavesdropping harder. (A Chinese company the good guy? Go figure.)

If this were the case–if Lebanon’s cellular networks were so deeply penetrated–then it’s evidence of the kind of cyberwar we’re not really equipped to understand, let alone deal with: namely data manipulation.

Narwani asks whether it could be possible that the tribunal has actually been hoodwinked by a clever setup: that all the cellular data was faked, when

a conspiring “entity” had to obtain the deepest access into Lebanese telecommunications networks at one or — more likely — several points along the data logging trail of a mobile phone call. They would have to be able to intercept data and alter or forge it, and then, importantly, remove all traces of the intervention.

After all, she says,

the fact is that Hezbollah is an early adherent to the concept of cyberwarfare. The resistance group have built their own nationwide fiber optics network to block enemy eavesdropping, and have demonstrated their own ability to intercept covert Israeli data communications. To imagine that they then used traceable mobile phones to execute the murder of the century is a real stretch.

Who knows? But Narwani asserts that

Nobody doubts Israel’s capacity to carry out this telecom sleight of hand — technology warfare is an entrenched part of the nation’s military strategies. This task would lie somewhere between the relatively facile telephone hacking of the News of the World reporters and the infinitely more complex Stuxnet attack on Iran’s nuclear facilities, in which Israel is a prime suspect.

In other words, there’s something going on here that is probably a lot more sophisticated than a tribunal can get to the bottom of. I’m no Mideast expert, but if only half of this is true, it’s clear that cellphones are the weakest link in a communications chain. And if this kind of thing is going on in Lebanon, one has to assume it’s going on in a lot of places.

Social Media and Politics: Truthiness and Astroturfing

By Jeremy Wagstaff

(This is a column I wrote back in November. I’m repeating it here because of connections to astroturfing in the HBGary/Anonymous case.)

Just how social is social media? By which I mean: Can we trust it as a measure of what people think, what they may buy, how they may vote? Or is it as easy a place to manipulate as the real world?

The answers to these questions aren’t of academic interest only. They go right to the heart of what may be our future. More and more of our world is online. And more and more of our online world is social media: A quarter of web pages viewed in the U.S. are on Facebook. So it’s not been lost on those who care about such things that a) what we say online may add up to be a useful predictor of what we may do at the shops, the movies, at the polling booth. And b) that social media is a worthwhile place to try to manipulate what we think, and what we do at the shops, the movies—and at the ballot box.

There is plenty of evidence supporting the former. Counting the number of followers a candidate has on Facebook, for example, is apparently a pretty good indicator of whether they’ll do well at the ballot box. The Daily Beast set up something called the Oracle, which scanned 40,000 websites—including Twitter—to measure whether comments on candidates in the recent U.S. elections were positive, negative, neutral or mixed. It correctly predicted 36 out of 37 Senate races, 29 out of 30 governors’ races and nearly 98% of the House races. That’s pretty good.

Dan Zarrella, a self-styled social media scientist, counted the followers of the Twitter feeds in 30 Senate, House and governor races and found that in 71% of the races, the candidate with the most Twitter followers was ahead in the polls. And Facebook found that candidates with more Facebook fans than their opponents won 74% of House races, and 81% of Senate races. More than 12 million people used the “I Voted” button this year, more than double the number in 2008.

Why is this interesting? Well, social media, it turns out, is quite a different beast from even recent phenomena such as blogs. It really is social, in that, more than previous Internet methods of communication, it reflects the views of the people using it. It is, one might say, democratic.

A study by researchers from the Technical University of Munich of the 2009 federal parliamentary elections in Germany, for example, revealed that, in contrast to the bulletin boards and blogs of the past, Twitter was reflective of the way Germans voted. Unlike bulletin boards and blogs, they wrote, “heavy users were unable to impose their political sentiment on the discussion.” The large number of participants, they found, “make the information stream as a whole more representative of the electorate.”

In other words, social media is as much a battleground for hearts and minds as the rest of the world. Even more so, perhaps, because it’s easier to reach people. Forget knocking on doors or holding rallies: Just build a Facebook page or tweet.

And, maybe, hire some political operators to build a fake movement, aka astroturfing?

Astroturfing, for those not familiar with the term, is the opposite of grassroots. If you lack the support of ordinary people, or don’t have time to get it, you can still fake it. Just make it look like you’ve got grassroots support. Since the term was coined in the mid-1980s it has become a popular activity among marketers, political operators and governments (think of China’s 50-cent blogging army). Astroturfing, in short, allows a politician to seem a lot more popular than he really is by paying folk to say how great he is.

Whether social media is ripe for astroturfing isn’t clear. On one hand, we know that the Internet is full of fakery and flummery: Just because your inbox is no longer full of spam doesn’t mean the Internet isn’t full of it—87% of all email, according to the latest figures from MessageLabs. You don’t see it because the filters are getting better at keeping it away from you. Twitter, by contrast, is much less spammy: the latest figures from Twitter suggest that after some tweaks earlier this year the percentage of unwanted messages on the service is about 1%.

So Twitter isn’t spammy, and it broadly reflects the electorate. But can it be gamed?

We already know that Twitter can spread an idea, or meme, rapidly—only four hops are needed before more or less everyone on Twitter sees it. In late 2009 Google unveiled a new product: Real time search. This meant that, atop the usual results to a search, Google would throw in the latest matches from the real time web—in other words, Twitter and its ilk. So getting your tweets up there would be valuable if, say, you were a political operator and you wanted people to hear good things about your candidate, or bad things about your rival. But were people doing this? Two researchers from Wellesley College in Massachusetts wondered.

Panagiotis Takis Metaxas and Eni Mustafaraj studied the local Senate race and found that they were. They looked at 185,000 Twitter messages that mentioned the two competing candidates and found plenty of astroturfing going on: political supporters were creating fake accounts and repeating each other’s messages, and sending them to likely sympathizers, in the hope of their messages hitting the mainstream.

The researchers found one group, apparently linked to an Iowa Republican group, was sending out one tweet a second linking to websites “exposing” their rival’s missteps and misstatements. Overall, the message they sent reached more than 60,000 users. The researchers concluded that “the fact that a few minutes of work, using automated scripts and exploiting the open architecture of social networks such as twitter, makes possible reaching a large audience for free…raises concerns about the deliberate exploitation of the medium.”

The point here is not merely that you’re propagating a point of view. That’s just spam. But by setting up fake Twitter accounts, tweeting and then repeating these messages, you’re creating the illusion that these views are widespread. We may ignore the first Twitter message we see exposing these views and linking to a website, but will we ignore the second, or the third?

This discovery of Twitter astroturfing in one race has prompted researchers at Indiana University to set up a tool they call Truthy—after comedian Stephen Colbert’s term to describe something that someone knows intuitively from the gut—irrespective of evidence, logic or the facts. Their tool has exposed other similar attacks which, while not explosive in terms of growth, are, they wrote in an accompanying paper, “nevertheless clear examples of coordinated attempts to deceive Twitter users.” And, they point out, the danger with these Twitter messages is that unless they’re caught early, “once one of these attempts is successful at gaining the attention of the community, it will quickly become indistinguishable from an organic meme.”

This is all interesting, for several reasons. First off, it’s only in the past few months that we’ve woken up to what political operators seem to be doing on Twitter. Secondly, while none of these cases achieves viral levels, the relative ease with which these campaigns can be launched suggests that a lot more people will try them out. Thirdly, what does this tell us about the future of political manipulation in social media?

I don’t know, but it’s naïve to think that this is just an American thing. Or a ‘what do you expect in a thriving democracy?’ thing. Less democratically minded organizations and governments are becoming increasingly sophisticated about the way they use the Internet to control and influence public opinion. Evgeny Morozov points to Lebanon’s Hezbollah, “whose suave manipulation of cyberspace was on display during the 2006 war with Israel”; my journalist friends in Afghanistan say the Taliban are more sophisticated about using the Internet than the Karzai government or NATO.

The good news is that researchers are pushing Twitter to improve their spam catching tools to stop this kind of thing from getting out of hand. But I guess the bigger lesson is this: While social media is an unprecedented window on, and reflection of, the populace, it is also an unprecedented opportunity for shysters, snake oil salesmen and political operators to manipulate what we think we know.

It may be a great channel for the truth, but truthiness may also be one step behind.

The Dangers of Faking It

(my weekly column, syndicated to newspapers)

By Jeremy Wagstaff

A 40-ton whale jumped out of the water and crash-landed onto a sailboat the other day. The moment was caught on camera by a tourist: the whale suspended a few meters above the boat before it smashed into mast and deck, leaving behind a mass of barnacle and blubber.

Amazing stuff. So the first question from a TV interviewer to the survivors of this close encounter between man and mammal? “Was this picture Photoshopped?”

Sad, but I have to admit it was my first question too.

Photoshopping—the art of digitally manipulating a photo—has become so commonplace that it probably should be the first question we ask when we see a photo.

After all, it’s understood that every photo in every fashion magazine in the world is Photoshopped—a wrinkle unwrinkled here, an eye unbagged there, an inch lost or gained below and above the midriff. We assume, when we look at a flattering photo of a celebrity, that it was Photoshopped first (apparently every celebrity has a Photoshopper to do just this).

But what of news photos? How do we feel about manipulation then?

Take the latest hoo-ha over some BP photos. Turns out that some photos on its website were tweaked to make BP look a bit more on-the-ball about monitoring the Gulf oil spill than it really was. Blank screens at its Houston command center were filled with images copied from other screens, prompting a search of BP’s website for other altered photos.

Another photo showed a helicopter apparently approaching the site of the spill. Upon closer inspection the helicopter was actually on the deck of an aircraft carrier. One can only guess why BP thought it necessary to make the chopper look as if it was flying.

BP, to its credit, has come clean and posted all the photos to a Flickr page “for the sake of transparency.”

But of course, it’s not enough. First off, the explanation is weasel-like: it places the blame on a “contract photographer” and writes vaguely of incidents where “cut-and-paste was also used in the photo-editing process.” It promises to instruct the photographer not to do it again and “to adhere to standard photo journalistic best practices.”

Well, yes and no. I’m willing to bet that a contract photographer did not make these kinds of decisions alone. And to suggest that a photographer contracted by BP to make photos for BP is somehow being asked to perform as a photo journalist is disingenuous.

I’m guessing, for example, that if the contract photographer had snapped some images of dying pelicans or oil-heavy beaches they wouldn’t be posted to the BP website “to adhere to standard photo journalistic best practices.” (In fact it’s quite fun to browse their photo gallery and look at how carefully the photos have been collected and presented. Compare them with others on Flickr, the titles of which sound unfortunately like items on a menu: “Hermit Crabs In BP Oil,” for example.)

Of course, no one expects BP to publish anything that may undermine its position. The problem lies with the fact that someone, somewhere in BP thought it worth tampering with what it did publish to improve its position.

Some have argued, so what? They fiddled with a couple of photos to make themselves seem a bit more industrious than they really were. So what?

Well, I would have thought it obvious, but the fact that people have argued this suggests it requires an answer. First off, it was bloggers who exposed the fraud. Hats off to them. A sign that crowd-sourcing this kind of thing works.

Secondly, while in itself more pathetic than malign, the manipulation proves that manipulation happens. We (well, not we journalists, but we bloggers) checked, and found the photos were faked. What else has BP faked?

Suggesting it’s the work of some rogue contract photographer doesn’t cut it. If BP’s PR crew knew what they were doing, and held themselves to “standard photo journalistic best practices,” they would have spotted the amateurish Photoshopping and taken action.

Instead they didn’t spot it, or spotted it and didn’t care, or they actually commissioned it. Or did it themselves. Whatever, they didn’t come clean, so to speak, until they’d been had, and then wheeled out the “transparency defense”—a tad too late, I fear, to convince anyone that that’s where their instincts lay.

Photos, you see, are pretty strong stuff.

Since their invention we have granted them special powers. Photographs preserve information and speak to us in a way that words—and perhaps even video—do not. Think of all those photos that have captured not only a moment but a slice of history: 9/11, the Vietnam War, the Spanish Civil War.

The problem is that we’re gradually waking up to the fact that photographs lie. It’s an odd process, this learning about the power of misrepresentation. It’s part technology, part distance, part a growing understanding that we have ascribed photos a power and finality they don’t deserve.

Let me put it more simply through an example: Robert Capa’s famous 1936 photo of the Falling Soldier. This one photo seemed to sum up not only the Spanish Civil War, but war itself. Only, it’s now widely believed the photo was staged, that Capa may have asked the soldier to fake his death. Does it matter?

Capa’s biographer Richard Whelan argues it doesn’t, that “the picture’s greatness actually lies in its symbolic implications, not in its literal accuracy.”

This is, of course, incorrect. Its symbolic implications lie in its accuracy.

And, of course, this is the problem. We need our photos to say something, to express a view that supplements, that goes beyond, the text that might accompany them, the truth that we need to have illustrated for us. And that’s where the problem begins.

Capa may not have intended his photo to be quite so iconic. After all, he took a bunch of photos that day, most of them unremarkable. An editor decided this was one of those he would publish.

Photographers are now aware they get one shot. So they’re pushed to capture more and more in the frame—more, perhaps, than was ever there. And, it turns out, have been doing so for as long as there have been cameras. One of the first war photographs, of the Crimean War’s Valley of the Shadow of Death by Roger Fenton in 1855, was staged—by physically moving cannonballs to the middle of the road.

Nowadays the cannonballs could be moved more easily: by Photoshop. A mouse click can add smoke to burning buildings in Lebanon, thicken a crowd, darken OJ Simpson’s face, or, in the case of Xinhua photographer Liu Weiqing, add antelope to a photo of a high-speed train.

Just as digitizing makes all this easier, so it makes it easier to spot errors. The problem is that we don’t have time to do this, meaning that it falls to bloggers and others online to do the work for us.

But it’s not as easy as it may look with hindsight, and the fact that we draw a distinction between images we expect to be faked—fashion, celebrity, sex—and those we don’t—news—suggests that we either have to get a lot better at spotting fakery or we need to insist that photos carry some watermark to prove they are what they purport to be.
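One simple form such a watermark could take is a cryptographic hash published alongside the image. A minimal sketch (the byte strings below are stand-ins for real image files):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 digest of a raw image file."""
    return hashlib.sha256(data).hexdigest()

original = b"\x89PNG...original pixel data"   # stand-in for the photo as published
tampered = b"\x89PNG...retouched pixel data"  # even a one-byte edit changes the digest

print(fingerprint(original) == fingerprint(original))  # True: file unchanged
print(fingerprint(original) == fingerprint(tampered))  # False: file altered
```

A hash only proves the file hasn’t changed since the digest was published; it says nothing about whether the scene was staged before the shutter clicked, which is exactly the problem Capa and Fenton pose.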

The bottom line is that it’s probably a good thing that the first question we ask of a photo is whether it’s fake. Turns out that we should have been asking that question a long time ago.

But there’s another possibility: that there may come a point where we just don’t trust photos anymore. It’s probably up to us journalists to find a way to stop that from happening.

Skype Cuts Some Rates

Skype has lowered the rates of its SkypeOut service to some destinations as part of its first-anniversary celebrations. Here are the details:

Six major new countries have been added to the SkypeOut Global Rate, a fixed, low-cost rate of 1.7 Euro cents per minute to popular calling destinations. China, Greece, Taiwan, Hong Kong, Poland and Switzerland have joined more than 20 additional destinations in the Global Rate. Skype has also significantly lowered SkypeOut rates for calling numbers in Armenia, Bangladesh, Belarus, Bulgaria, the Cook Islands, Croatia, the Czech Republic, Denmark, the Dominican Republic, Estonia, Finland, Germany, Hungary, Iceland, India, Indonesia, Ireland, Korea, Lebanon, Luxembourg, Malaysia, Mexico, the Netherlands, Poland (mobile), Portugal, Russia, Slovakia, South Africa, Spain, Sri Lanka and Turkey.

I’m not quite clear from the press release, but it sounds as if this is an average reduction of 15%.

It’s not all good news: Prices for SkypeOut calls to Saudi Arabia, Papua New Guinea, Oman, Liechtenstein and Haiti numbers will increase slightly.