“One Technician Unplugged The Estonian Internet”

In all the hoo-ha about the Arab Revolutions some interesting WikiLeaks cables seem to be slipping through the net. Like this one from 2008 about Estonia’s view of the cyberattack on Georgia. Estonia had learned some tough lessons from Russia’s cyberattack on its defenses the previous year, so was quick to send cyber-defense experts to “help stave off cyber-attacks emanating in Russia”, according to the Baltic Times at the time.

The cable, dated September 22, 2008, reports on meetings with Estonian officials covering both the lessons from Estonia’s own experience and some candid commentary on Georgia’s preparedness and response. Here are some of the points:

  • Russia’s attack on Georgia was a combination of physical and Internet attack. “[Hillar] Aarelaid [Director of CERT-Estonia] recapped the profile of the cyber attacks on Georgia: the country’s internet satellite or microwave links which could not be shut down (inside Russia) were simply bombed (in southern Georgia).”
  • Russia seemed to have learned some lessons from the Estonia attack, suggesting that Estonia was a sort of dry-run: “the attacks on Georgia were more sophisticated than those against Estonia, and did not repeat the same mistakes. For example, in 2007, the ‘zombie-bots’ flooded Estonian cyberspace with identical messages that were more easily filtered. The August 2008 attacks on Georgia did not carry such a message.”
  • That said, Georgia itself learned some lessons, Aarelaid was quoted as saying. While it failed to keep “archives of collected network flow data, which would have provided material for forensic analysis of the attacks,” the country “wisely did not waste time defending GOG (Government of Georgia) websites, he said, but simply hosted them on Estonian, U.S. and public-domain websites until the attack was over.” This “could not have been taken without the lessons learned from the 2007 attacks against Estonia.”
  • Estonia felt it got off lightly, in that an attacker could have done far more damage by trying to trigger a bank run. (This is not as clear as it could be.) “Aarelaid felt that another cyber attack on Estonia ‘…won’t happen again the same way…’ but could be triggered by nothing more than rumors. For example, what could have turned into a run on the banks in Estonia during the brief November 2007 panic over a rumored currency devaluation was averted by luck. Money transfers into dollars spiked, he explained, but since most Estonians bank online, these transfers did not deplete banks’ actual cash reserves.” I take this to mean that if people had actually demanded cash, rather than merely transferred their money into another currency online, the attack could have had far more damaging effects on the Estonian banking system.
  • Finally, the debate within Estonia focused on clarifying “who has the authority, for example, to unplug Estonia from the internet. In the case of the 2007 attacks, XXXXXXXXXXXX noted, it was simply one technician who decided on his own this was the best response to the growing volume of attacks.”

My War On ATM Spam and Other Annoyances

By Jeremy Wagstaff

(This is a copy of my weekly syndicated column)

You really don’t need to thank me, but I think you should know that for the past 10 years I’ve been fighting a lonely battle on your behalf. I’ve been taking on mighty corporations to rid the world of spam.

Not the spam you’re familiar with. Email spam is still around, it’s just not in your inbox, for the most part. Filters do a great job of keeping it out.

I’m talking about more serious things, like eye spam, cabin spam, hand spam, counter spam and now, my most recent campaign, ATM spam.

Now there’s a possibility you might not have heard of these terms. Mainly because I made most of them up. But you’ll surely have experienced their nefarious effects.

Eye spam is when something is put in front of your face and you can’t escape from it. Like ads for other movies on DVDs or in cinemas that you can’t skip. Cabin spam is when flight attendants wake you from your post-prandial or takeoff slumber to remind you that you’re flying their airline, they hope you have a pleasant flight and there’s lots of duty free rubbish you wouldn’t otherwise consider buying wending its way down the aisle right now.

Then there’s hand-spam: handouts on sidewalks that you have to swerve into oncoming pedestrian traffic to avoid. Counter spam is when you buy something and the assistant tries to sell you something else as well. “Would you like a limited edition pickled Easter Bunny with radioactive ears with that?”

My rearguard action against this is to say “only if it’s free. If it’s not, then you have given me pause for thought. Is my purchase really necessary, if you feel it necessary to offer me more? Is it a good deal for me? No, I think I’ll cancel the whole transaction, so you and your bosses may consider the time you’re costing me by trying to offload stuff on me I didn’t expressly ask for.” And then I walk out of the shop, shoeless, shirtless, or hungry, depending on what I was trying to buy, but with that warm feeling that comes from sticking it to the man. Or one of his minions, anyway.

And now, ATM spam. In recent months I’ve noticed my bank will fire a message at me while I’m conducting my automated cash machine business, offering some sort of credit card, or car, or complex derivative; I’m not sure what. I’ve noticed that this happens after I’ve ordered my cash, but that the cash won’t start churning inside the machine until I’ve responded to this spam message.

Only when I hit the “no” button does the machine start doing its thing. This drives me nuts because once I’ve entered the details of my ATM transaction I am usually reaching for my wallet ready to catch the notes before they fly around the vestibule or that suspicious looking granny at the next machine makes a grab for them. So to look back at the machine and see this dumb spam message sitting there and no cash irks me no end.

My short-term solution to this is to look deep into the CCTV lens and utter obscenities, but I have of late realized this may not improve my creditworthiness. Neither has it stopped the spam messages.

So I took it to the next person up the chain, a bank staff member standing nearby called Keith. “Not only is this deeply irritating,” I told him, “but it’s a security risk.” He nodded sagely. I suspect my reputation may have preceded me. I won a small victory against this particular bank a few years back when I confided in them that the message that appeared on the screen after customers log out of their Internet banking service—“You’ve logged out but you haven’t logged off”, accompanied by a picture of some palm trees and an ad for some holiday service—may confuse and alarm users rather than help them. Eventually the bank agreed to pull the ad.

So I was hoping a discreet word with Keith would do the trick. Is there no way, I said, for users to opt out of these messages? And I told him about my security fears, pointing discreetly to the elderly lady who was now wielding her Zimmer frame menacingly at the door. Keith, whose title, it turns out, is First Impression Officer, said he’d look into it.

So I’m hopeful I will have won another small battle on behalf of us consumers. Yes, I know I may sound somewhat eccentric, but that’s what they want us to think. My rule of thumb is this: If you want to take up my time trying to sell me something because you know I can’t escape, then you should pay for it—the product or my time, take your pick.

Now, while I’ve got your attention, can I interest you in some of those Easter bunny things? They’re actually very good.

Social Media and Politics: Truthiness and Astroturfing

By Jeremy Wagstaff

(This is a column I wrote back in November. I’m repeating it here because of connections to astroturfing in the HBGary/Anonymous case.)

Just how social is social media? By which I mean: Can we trust it as a measure of what people think, what they may buy, how they may vote? Or is it as easy a place to manipulate as the real world?

The answers to these questions aren’t of academic interest only. They go right to the heart of what may be our future. More and more of our world is online. And more and more of our online world is social media: A quarter of web pages viewed in the U.S. are on Facebook. So it’s not been lost on those who care about such things that a) what we say online may add up to be a useful predictor of what we may do at the shops, the movies, at the polling booth. And b) that social media is a worthwhile place to try to manipulate what we think, and what we do at the shops, the movies—and at the ballot box.

There is plenty of evidence supporting the former. Counting the number of followers a candidate has on Facebook, for example, is apparently a pretty good indicator of whether they’ll do well at the ballot box. The Daily Beast set up something called the Oracle which scanned 40,000 websites—including Twitter—to measure whether comments on candidates in the recent U.S. elections were positive, negative, neutral or mixed. It predicted 36 out of 37 Senate races and 29 out of 30 Governors’ races and nearly 98% of the House races. That’s pretty good.

Dan Zarrella, a self-styled social media scientist, counted the followers of the Twitter feeds of candidates in 30 Senate, House and governor races and found that in 71% of the races, the candidate with the most Twitter followers was ahead in the polls. And Facebook found that candidates with more Facebook fans than their opponents won 74% of House races, and 81% of Senate races. More than 12 million people used the “I Voted” button this year, more than double the number in 2008.

Why is this interesting? Well, social media, it turns out, is quite a different beast to even recent phenomena such as blogs. Social media really is social, in that more than previous Internet methods of communication, it reflects the views of the people using it. It is, one might say, democratic.

A study by researchers from the Technical University of Munich of the 2009 federal parliamentary elections in Germany, for example, revealed that, in contrast to the bulletin boards and blogs of the past, Twitter was reflective of the way Germans voted. Unlike bulletin boards and blogs, they wrote, “heavy users were unable to impose their political sentiment on the discussion.” The large number of participants, they found, “make the information stream as a whole more representative of the electorate.”

In other words, social media is as much a battleground for hearts and minds as the rest of the world. Even more so, perhaps, because it’s easier to reach people. Forget knocking on doors or holding rallies: Just build a Facebook page or tweet.

And, maybe, hire some political operators to build a fake movement, aka astroturfing?

Astroturfing, for those not familiar with the term, is the opposite of grassroots. If you lack the support of ordinary people, or don’t have time to get it, you can still fake it. Just make it look like you’ve got grassroots support. Since the term was coined in the mid 1980s it has become a popular activity among marketers, political operators and governments (think Chinese 50-cent blogging army). Astroturfing, in short, allows a politician to seem a lot more popular than he really is by paying folk to say how great he is.

Whether social media is ripe for astroturfing isn’t clear. On one hand, we know that the Internet is full of fakery and flummery: Just because your inbox is no longer full of spam doesn’t mean the Internet isn’t full of it—87%, according to the latest figures from MessageLabs. You don’t see it because the filters are getting better at keeping it away from you. Twitter, by contrast, is much less spammy: the latest figures from Twitter suggest that after some tweaks earlier this year the percentage of unwanted messages on the service is about 1%.

So Twitter isn’t spammy, and it broadly reflects the electorate. But can it be gamed?

We already know that Twitter can spread an idea, or meme, rapidly—only four hops are needed before more or less everyone on Twitter sees it. In late 2009 Google unveiled a new product: Real time search. This meant that, atop the usual results to a search, Google would throw in the latest matches from the real time web—in other words, Twitter and its ilk. So getting your tweets up there would be valuable if, say, you were a political operator and you wanted people to hear good things about your candidate, or bad things about your rival. But were people doing this? Two researchers from Wellesley College in Massachusetts wondered.

Panagiotis Takis Metaxas and Eni Mustafaraj studied the local Senate race and found that they were. They looked at 185,000 Twitter messages that mentioned the two competing candidates and found plenty of astroturfing going on—political supporters creating fake accounts, repeating each other’s messages, and sending them to likely sympathizers, in the hope of their messages hitting the mainstream.

The researchers found one group, apparently linked to an Iowa Republican group, was sending out one tweet a second linking to websites “exposing” their rival’s missteps and misstatements. Overall, the message they sent reached more than 60,000 users. The researchers concluded that “the fact that a few minutes of work, using automated scripts and exploiting the open architecture of social networks such as twitter, makes possible reaching a large audience for free…raises concerns about the deliberate exploitation of the medium.”

The point here is not merely that you’re propagating a point of view. That’s just spam. But by setting up fake Twitter accounts and tweeting and then repeating these messages, you’re creating the illusion that these views are widespread. We may ignore the first Twitter message we see exposing these views and linking to a website, but will we ignore the second or the third?

This discovery of Twitter astroturfing in one race has prompted researchers at Indiana University to set up a tool they call Truthy—after comedian Stephen Colbert’s term to describe something that someone knows intuitively from the gut—irrespective of evidence, logic or the facts. Their tool has exposed other similar attacks which, while not explosive in terms of growth, are, they wrote in an accompanying paper, “nevertheless clear examples of coordinated attempts to deceive Twitter users.” And, they point out, the danger with these Twitter messages is that unless they’re caught early, “once one of these attempts is successful at gaining the attention of the community, it will quickly become indistinguishable from an organic meme.”

This is all interesting, for several reasons. First off, it’s only in the past few months that we’ve woken up to what political operators seem to be doing on Twitter. Secondly, while none of these cases achieves viral levels, the relative ease with which these campaigns can be launched suggests that a lot more people will try them out. Thirdly, what does this tell us about the future of political manipulation in social media?

I don’t know, but it’s naïve to think that this is just an American thing. Or a ‘what do you expect in a thriving democracy?’ thing. Less democratically minded organizations and governments are becoming increasingly sophisticated about the way they use the Internet to control and influence public opinion. Evgeny Morozov points to Lebanon’s Hezbollah, “whose suave manipulation of cyberspace was on display during the 2006 war with Israel”; my journalist friends in Afghanistan say the Taliban are more sophisticated about using the Internet than the Karzai government or NATO.

The good news is that researchers are pushing Twitter to improve its spam-catching tools to stop this kind of thing from getting out of hand. But I guess the bigger lesson is this: While social media is an unprecedented window on, and reflection of, the populace, it is also an unprecedented opportunity for shysters, snake oil salesmen and political operators to manipulate what we think we know.

It may be a great channel for the truth, but truthiness may be only one step behind.

Podcast: Social Media and Social Conflict

The BBC World Service Business Daily version of my piece on the relationship between communications and political change. (The Business Daily podcast is here.)


To listen to Business Daily on the radio, tune into BBC World Service at the following times, or click here.

Australasia: Mon-Fri 0141*, 0741
East Asia: Mon-Fri 0041, 1441
South Asia: Tue-Fri 0141*, Mon-Fri 0741 
East Africa: Mon-Fri 1941 
West Africa: Mon-Fri 1541* 
Middle East: Mon-Fri 0141*, 1141* 
Europe: Mon-Fri 0741, 2132 
Americas: Tue-Fri 0141*, Mon-Fri 0741, 1041, 2132

Thanks to the BBC for allowing me to reproduce it as a podcast.

PR That Doesn’t Bark, Or Barks Too Much

This is my weekly Loose Wire Service column, an edited version of which was recorded for my BBC World Service slot. Audio to follow.

There’s a moment in All The President’s Men that nails it.

Bob Woodward is telling his editors about when he’d called up the White House to confirm that Howard Hunt, one of the Watergate burglars, worked there as a consultant for Charles Colson, special counsel to President Nixon. “Then,” Woodward tells his editors, “the P.R. guy said the weirdest thing to me. (reading) ‘I am convinced that neither Mr. Colson nor anyone else at the White House had any knowledge of, or participation in, this deplorable incident at the Democratic National Committee.'”

Isn’t that what you’d expect him to say, one of the editors says. Absolutely, Woodward replies. So?

Woodward, the script says, has got something and he knows it. “I never asked them about Watergate,” he says. “I simply asked what were Hunt’s duties at the White House. [A beat.] They volunteered that he was innocent when nobody asked if he was guilty.”

This, to me, is not only great cinema but classic journalism. It’s a classic PR error, and you can see it all the time. Not always as dramatically, but it’s there if you notice it. To say or do something that reveals what your client really cares about—and how much they care about it.

I see it in breathless press releases that I never asked for. Read: We really, really need to get this information out. We’re desperate. So you think I’m just a press-release-churning machine, is that it?

I see it in interviews when the media-trained exec grinds each answer back to the message bullet-points he’s got tattooed into his brain:

When did you start to think there was a problem with the building? When people started falling out of one side? We have always focused our synergistic approach to inbuilding personnel customer management by being people-centred, and while we regret the involuntary vertical defenestrations, we’re sure that until they exited the building extramurally they felt as empowered as we did about the efficiencies we implemented by removing what turned out to be vital structural features.

I see it in unsolicited pitches that offer interviews with CEOs who really should be busier than this—particularly ones that end with “when is your availability?” as if to say, if we give the impression you already said yes, maybe you might. So you think I’m stupid and gullible, is that it?

I see it in PR companies which are a little too eager to lend us technology columnists a gadget. Read: this is a product that isn’t good enough to sell on its own, so we’ve got a warehouse full of them to try to get fellas like you interested.

I see it in a cupboard full of gadgets that PR companies promised to come pick up after I finished reviewing them but never did. Read: even the client doesn’t really care about this. And it certainly belies the talk about the review units being in hot demand.

This sounds like heaven, I agree, but the reality is that there is such a thing as too many gadgets. Especially ones which weren’t that good to start with.

Just this morning I saw it in an emailed response from a major software company to a very specific question I’d asked. The question was sort of addressed, but tagged onto it was pure PR-speak, addressing a bunch of imaginary questions I hadn’t asked, or even implied I was interested in. Lord knows how long they took to put this together. Actually I know—10 days, because that’s how long I had to wait.

As a journalist, when you get one of these responses your instinct is to remove clumps of hair from your own head, or, if already clump-free, those of family members or passers-by. But actually, buried in the robot speak are the nuggets.

The email in question talked of “a community effort…to understand and unravel this extremely complex issue.” I’m not going to tell you what the complex issue is, but the words are a giveaway: “We couldn’t figure this one out ourselves so we had to turn to companies we’d have much preferred to have humiliated by getting there first.”

Subtext: We’re not actually as good at this as we thought, or our customers assume. We were out of our depth so we’re falling back on the old “we’re all in this together” trick. Works great if you’re at the bottom of the bucket and the crab above you looks like he’s about to make a break for it.

Buried in all that unrequested bilge are quite a few good story ideas. Nothing points you to a company’s weak spot better than PR guff dreamed up in the hope of putting journalists off the scent. Thanks, big software company, for pointing out your sensitive spots!

Of course, coupled with the “But I never asked them that” clue is Sherlock Holmes’ dog that didn’t bark. In Conan Doyle’s short story “Silver Blaze” Holmes is summoned to investigate the disappearance of the eponymous racehorse. The less-than-impressed Scotland Yard detective asks Holmes: “Is there any other point to which you would wish to draw my attention?”

Holmes: “To the curious incident of the dog in the night-time.”
Detective: “The dog did nothing in the night-time.”
Holmes: “That was the curious incident.”

In this case, of course, the dog didn’t bark because it recognised its owner, who turned out to be the guilty party. Whereas the Woodward Clue is about what PR folk put in that wasn’t asked for, the Holmes Clue is about what they leave out. In PRdom this can be requests that go unanswered, questions that get very short answers while others get long ones, answers that skirt the question, or requested review units that somehow never arrive.

In the case of my big software company response, it’s the fact that they omitted to really answer the question I asked—and got very vague when I sought a timeline. It’s not rocket science to know when smoke is wafting in your direction.

So how should PR avoid these pitfalls? Well, the first rule is to answer the question, or tell the journalist you’re not answering it—and preferably why. Not answering it by pretending she didn’t ask it is going to infuriate a journalist and flag that there’s something there worth chasing.

But worse, don’t answer questions you weren’t asked. At least not directly. You can let it be known there’s more if they want it, but don’t force-feed them. Journalists aren’t all Robert Redfords, but neither are they foie gras geese.

You can always tell if a journalist knows you’re not answering the question: they’ll nod a lot. Nodding a lot doesn’t mean “I’m agreeing with you, and I’m just desperate to hear more”, it’s “Why is this guy telling me stuff in the press release totally unrelated to what I asked? I wonder if he’d mind if I tore his hair out?” Nod, nod, nod. The nodding, of course, is a desperate attempt to speed up time so he can ask one more question and get to the pub before it shuts. Never works, but it’s a sort of reflex action in the face of too much barking, or not enough.