Locking Users In the Smart Way

By | November 22, 2011


I was directed to this excellent piece, A Victim Treats His Mugger Right : NPR, via Facebook last night.  And it made me realise how publishers don’t make the most of that kind of referral.

There’s plenty of evidence to suggest that nowadays we tend to get more and more of our reading from peer suggestions like this. Navigating News Online from the Project for Excellence in Journalism estimates that while Google still accounts for 30% of traffic to the main U.S. news sites, Facebook is the second or third most important driver of traffic. And yet all news sites do to respond to that is put a Facebook like button on their stories and cross their fingers.

What they should be doing is creating what I would call “corners”, but might also be called “series” or “seasons”. The same PEJ report notes that casual visitors to a news website account for the vast majority of its audience: at USAToday, for example, a third of users spent between one and five minutes on the paper’s website each month. Power users–those who return more than 10 times a month and spend more than an hour there–account for an average of just 7% of total users for the top 25 news sites.

This represents a huge failure on the part of websites to get users to come back and spend more time there.

And I don’t see a lot of websites doing much about it. Which is a shame, because it’s relatively easy. You just need to think of your publication as a TV network, and your content as individual brands. Or, to continue the analogy, seasons.

If I start watching Archer, or Secret Millionaire, and I enjoy it, chances are I’ll set my TV to record each episode. I like one bite; I want to take the whole season. It may not be smart television, but it’s smart branding. But apart from columnists and a few other regular features, we don’t think the same way when it comes to our content.

Take the NPR piece. It’s about a New York social worker called Julio Diaz who is mugged. He gives the mugger his wallet, and then invites him to dinner. It’s a touching tale, and has been tweeted 635 times, shared on Facebook more than 200,000 times and has 92 comments. And, get this: It was published on March 28, 2008. More than three years ago. I didn’t even notice that when I was pointed to the story by a friend on Facebook. And I wouldn’t have cared: Once I started reading the story I was hooked, and listened to the recording all the way through.

This piece comes from a series called StoryCorps, a magnificent oral history project for which NPR is one of the national partners. Through three permanent StoryBooths and a traveling MobileBooth it has recorded more than 35,000 interviews since 2003. It has its own StoryCorps Facebook page, with more than 25,000 followers and a lively feel to it. (I recommend watching some of the animated accounts; they’re very moving.)

My point is this: StoryCorps is like a TV series. Loyalty is built around the brand itself: People know that if they like one item, they’re sure to like the next. And yet we do so little in our media products to make the most of this human desire to hear/read/watch more of something we like. Because we are news people, we think news is enough of a brand; we forget that for most people news is not in itself a reason to visit a news website. They are instead looking for more of what they may have liked before, and if they can’t find it, they won’t come back again.

Hence the dreadful statistics mentioned above.

So how to change this? Well, looking at the NPR page of the Julio Diaz story, we see a lot of the usual efforts to retain interest. There’s the most popular slot on the right, the related stories below, and then below that More From This Series. There are also links to subscribe to the podcast of the series, and to the RSS feed for this series.

This is all good. But it’s just the start. Let’s break down what these elements are:

  • The Twitter/Facebook like buttons are fine. But these are just ways of driving non-users to the same individual piece of content–in other words, this page.
  • The related links are ways of driving casual users to other internal content.
  • The podcast/RSS are ways of converting casual users to regular users of the content.

By defining them like this, it’s clear that only the last one really has any long-term objective to it. If we can get a user to subscribe to the podcast or the RSS feed, then we have actually got a loyal user–someone who is likely to spend more than a few minutes a month on our site, and to actually demonstrate some loyalty to our brand.
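Mechanically, subscribing at the series level is very simple; this is all an RSS reader does with a per-series feed. A minimal sketch in Python–the feed XML below is an invented sample, not NPR’s actual StoryCorps feed:

```python
import xml.etree.ElementTree as ET

# An invented, minimal series-level feed. In practice this XML would be
# fetched from a per-series URL rather than embedded as a string.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>StoryCorps (sample)</title>
    <item><title>A Victim Treats His Mugger Right</title>
          <link>http://example.org/mugger</link></item>
    <item><title>Another Story</title>
          <link>http://example.org/another</link></item>
  </channel>
</rss>"""

def items(feed_xml):
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(i.findtext("title"), i.findtext("link"))
            for i in root.iter("item")]

for title, link in items(SAMPLE_FEED):
    print(title, "->", link)
```

The point of the sketch is how little there is to it: the reader polls one URL scoped to one series, and the subscriber never has to visit the site. Whatever replaces RSS would need to preserve exactly that property.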

(Included in this last section is the Facebook page for a publication too, but I’m not going to go into that here.)

Now it’s probably no accident that RSS and podcasts are in steep decline. (Evidence for the decline is anecdotal, because usage of readers like Google Reader are still rising, but the rate of increase is falling, according to this piece on Quora; besides, a lot of other RSS readers have died off: Bloglines was closed down last September and NetNewsWire was sold earlier this month.)

Searches for the term RSS on Google have been falling steadily since 2006:

And podcasts haven’t fared much better. Their heyday was 2005 and 2006:

I think it’s no accident that both peaked around five years ago. That was the era of Web 2.0, and now we’re into the era of Social Media, which is dominated by Facebook and Twitter. Again, no accident that both use RSS, or used,  but have since moved on, or tried to move on.

The bottom line with both RSS and podcasts is that both have had their day. Both are a little too nerdy for most people: RSS is still way too tricky for ordinary users to master, and podcasts may be relatively easy to grab from iTunes, but still require a degree of managing that clearly doesn’t sit well.

Web 2.0 has moved on, and as social media has become more popular, and the tools for using it more user-friendly, podcasts and RSS have been left behind.

But, and here is the key point, Facebook and Twitter haven’t replaced them. RSS was/is a way for me to get your content to come to me. Facebook doesn’t really offer that, and neither, if you think about it, does Twitter.

For me to see your content I have to go to your Facebook page, or, alternatively, wait for it to pop up in my news feed. The same is true of Twitter.

RSS allowed me to decide which of your content I liked–assuming you offered more than a single feed–and then to be able to access that on any device I liked. Podcasts were similar, but for audio and video. Now both are more or less dead, and, at least in terms of building loyalty to media channels, we’re not only back at square one, we’ve allowed other platforms–Facebook, Twitter, and now Google+–to place themselves between us and our reader.

I think this illustrates the weak thinking that media has tolerated. We need, somehow, to develop successor tools to RSS and podcasts that help us to build pipes direct to our readers/users.

Some people are trying this with iPhone/iPad/Android apps. It’s a start. But it doesn’t scale particularly well: The more apps there are, the less time people will spend on them.

And, more important, it’s still making a fundamental mistake by assuming that our readers are interested in us as a brand. They’re not. They’re interested in the channels we offer. Thinking of those channels as seasons, I hope, makes more sense: we don’t just watch anything on a channel, we watch shows we like.

So we need to break down our content in this way, and then develop tools–apps, if you like–which cater to this desire for content directly related (not automatically selected, or ‘may be related’) to the content a user has already shown an interest in.

This is not that hard. NPR could build an app which makes it easier for anyone interested in the StoryCorps series to get all that content in a more straightforward way than RSS or podcasts.

But it shouldn’t stop there. Measuring interest in a series should spur imaginative regeneration, repurposing and forking of content. The piece I mentioned, for example, had clearly resonated with the audience and should be paired with follow-up stories. Indeed, the StoryCorps corner of the NPR website should be a brand in itself, a community where editors regularly interact with readers and find ways to turn those casual users into regulars.

This is not rocket science. It’s simple math. At the moment we’re allowing other platforms to determine what people read on our website, and when they do drop by, we rely on HTML code, widgets and buttons to try to keep them.

Worse, we think about ‘keeping’ merely in terms of ‘sticky’: distracting readers by dangling other stories in front of their noses until eventually they get bored, or go home, or die, or something. I use the same tricks to entertain my 9-month-old. We need to be smarter than this.

Thinking of our content in terms of ‘series’ might be a good place to start.

The Big Boys’ Mea Culpas

By | November 22, 2011

I find it interesting that companies can get things so wrong. News Corp just sold off Myspace for a fraction of its original price today, effectively admitting it didn’t get social media.

Microsoft famously came late to the table with the Internet, and has been late to more or less every party since. It’s now come out with Office 365, an awful name for a product that is basically an admission that Google Docs is good enough for most people, and that Microsoft Office is largely toast (an incorrect assumption, I reckon; I still can’t do without it).

Then we have Google. Google has made a surprising number of missteps: Buzz, and Wave (which it dumped as abruptly as it hyped it, in my view). Now, with the launch of Google+, it’s also acknowledging that it got the Web wrong: Instead of seeing it as a network, it saw it as a library. This from AllThingsD’s Liz Gannes, who asked Vic Gundotra why he and Bradley Horowitz had spent so much of the launch self-flagellating about why Google was so late to the social media dance:

Google Opens Up About Social Ambitions on Google+ Launch Day – Liz Gannes – Social – AllThingsD: “Gundotra: It’s just sincere. I don’t think it’s anything more than that. We do have a mission that we’ve been working on for a long time: organizing the world’s information and making it universally accessible and available. And when you look at the web today it’s obvious it’s not just about pages, it’s about people. It’s not just about information, it’s about what individuals are doing. So I think we have to do that in a coherent way. We think there’s just tremendous room to do great stuff.”

Well put: Google really didn’t get the web. And probably still doesn’t; one might argue that the algorithms it uses to rank pages have to be constantly updated because they don’t really reflect the dynamic nature of most web pages these days. I’m not sure quite what I mean by that, so I’ll leave it for now.

Finally, what might one ask about Apple? Where have they gone wrong? MobileMe is a pretty small misstep. Quibbles with OSX are relatively small: I get the sense that a lot of the things wrong with the OS aren’t because they keep tweaking things (the usual complaint from Windows users) but that there’s a stubbornness about not changing things: A weak file explorer (Finder), an inability to resize windows except from one corner, a confusing division of function between dock icons, menu bar icons, menu bar menus, in-window menus etc etc…

But apart from those gripes with the Mac OS, you gotta hand it to Apple. No big mea culpas, at least in the past decade.

Media: Reducing Story Production Waste

By | November 22, 2011

In trying to change news to match the new realities of the Interwebs, media professionals are still somewhat stuck in old ways of doing things. One is to fail to address the massive waste in news production–or at least parts of it.

So what potential waste is there? Well, these are the obvious ones:

  • Gathering: Reporters/trips/stories per trip/matching other outlets
  • Editing: The number of people who look at a story before it is published/time a story takes to work through the system

I’m more interested, however, in the amount of waste from material generated. Think of it like this:

Inputs:

  • Story idea
  • Logistics (travel/communications/reporting tools)
  • Interviews, multimedia and other material generated

Outputs:

  • Story
  • Photo
  • Video (sometimes)

Wastage:

  • All content not used in the story (some may be reused–photos or sidebars, say–but rarely)
  • All content used that’s not reused/repurposed.

This seems to me to be extremely wasteful for an industry in so much pain. Any other industry would look not just to pare back its factors of production but also to minimize the waste generated.

Any journalist will know just how much we’re talking about. Say you interview five people for a story. Even a stock market report is going to involve five interviews of at least five minutes each. At about 150 words a minute, that’s nearly 4,000 words. The stock market report itself is going to be about 500 words, maybe 600. That leaves more than 3,000 words–say 2,500, allowing for the reporter’s questions and some backchat–gone to waste. For every 500 words produced we had to throw out 2,500.
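The back-of-envelope sums above can be written out explicitly–the figures are the same rough assumptions, not measurements:

```python
# Rough assumptions from the stock-market-report example; not measurements.
interviews = 5          # people interviewed for one report
minutes_each = 5        # minimum length of each interview
words_per_minute = 150  # approximate speaking rate
story_words = 500       # length of the published report

gathered = interviews * minutes_each * words_per_minute  # words recorded
wasted = gathered - story_words                          # words never published

print(f"gathered {gathered}, published {story_words}, wasted {wasted}")
print(f"roughly {wasted / story_words:.1f} words thrown out per word used")
```

Even before allowing for the reporter’s questions and backchat, that’s 3,750 words gathered for 500 published–a waste ratio most industries would find alarming.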

Yes, I know it’s not a very scientific way of doing things, but you get my point. Most journalists only write down the quotes they need for the story, and many will delete the notes they’ve taken if they’re typing them on the screen in the same document they’re writing the story on. So all that material is wasted.

A good reporter will keep the good stuff, even if it’s not used in the story, and will be able to find it again. But I don’t know of any editorial system that helps them do that–say, by tagging or indexing the material–let alone to make that available to other reporters on the same beat.

This is where I think media needs to change most. It needs to assume that all material gathered by journalists, through interviews, research, even browsing, is potentially content. It needs to help journalists organise this material for research but, more importantly, to generate new content from it.

Take this little nugget, for example, in a New York Times story, Nokia Unveils a New Smartphone, but Not a Product of Its Microsoft Deal – NYTimes.com. The reporter writes of the interviewee, Nokia’s new chief executive Stephen Elop: “During the interview, he used the words ‘innovate’ or ‘innovation’ 24 times.”

I really like that. It really captures something that quotes alone don’t. We would call it “interview metadata”–information about the interview that is not actual quotes or color but significant, nonetheless.

Whether the journalist decided to count them early on during the interview, took such good notes that a keyword search or manual count afterwards was enough, or transcribed the whole thing in his hotel room later, I don’t know. (A quibble: I would have put the length of the interview in that sentence, rather than an earlier one, because it lends the data some context. Or one could include the total number of words in the interview, or compare the count with another word, such as “tradition”. Even better, create a word cloud out of the whole interview.) (Update: here’s another good NYT use of metadata, this time the frequency of words in graduation speeches: Words Used in 40 Commencement Speeches – Class of 2011 – Interactive Feature – NYTimes.com)

The point? Elop is an executive, and he has a message. He wants to convey the message, and so he is using carefully chosen words to not only ensure they’re in any quote that’s used, but also to subliminally convey to the journalist the angle he hopes the journalist will adopt. By taking the interview metadata and presenting it separately, that objective, and strategy, will be well illustrated to the reader.
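Extracting that kind of interview metadata is trivial once a transcript exists. A minimal sketch in Python–the transcript fragment is invented for illustration, standing in for a full interview:

```python
import re
from collections import Counter

# Invented fragment standing in for a full interview transcript.
transcript = """We innovate because innovation is our tradition.
To innovate is to lead; innovation drives everything we do."""

# Split into lowercase words and tally them.
words = re.findall(r"[a-z]+", transcript.lower())
counts = Counter(words)

# Count a keyword together with its derived forms, as the NYT did
# with "innovate"/"innovation".
innovate_mentions = sum(n for w, n in counts.items()
                        if w.startswith("innovat"))
print("innovate/innovation:", innovate_mentions)
print("tradition:", counts["tradition"])
```

The same tally feeds a comparison word or a word cloud directly–once the transcript is in the system, the metadata comes almost for free, which is exactly the argument for keeping the material rather than throwing it away.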

And, of course, you’ve reduced the story production wastage, or SPW, significantly.

Media can help this process by developing tools and offering services to maximise the usefulness of material gathered during research and interviews, and to reduce the time a journalist spends on marshalling this material.

Suggestions?

  • Transcription services, where journalists can send a recording and get the material back within the hour (or even as the interview is conducted, if the technology is available).
  • Push some of the content production to the journalist: let them experiment with wordclouds and other data visualization tools, not only to create end product but to explore the metadata of what they’ve produced.
  • Explore and provide content research and gathering tools (such as Evernote) to journalists so they don’t have to mess around too much to create stuff drawing on existing material they’ve gathered for the story they’re working on, from previous research and interviews, and, hopefully, from that of colleagues.

A lot of my time training journalists these days is spent on these kinds of tools, and I’m always surprised at how little use is made of them. That needs to change if media is to find a way to make more use of the data it gathers in the process of creating stories.

Getting Paid for Doing Bad Things

By | November 22, 2011

I have recently received half a dozen offers to place links to reputable companies’ websites in my blogs.

Think of it as product placement for the Internet. It’s been around a while, but I just figured out how it’s done, and it made me realise that the early dreams of a blogging utopia on the web are pretty much dead.

Here’s how this kind of product placement works. If I can persuade you to link to my product page in your blog, then my product will appear more popular and rise up Google’s search results accordingly. Simple.

An ad wouldn’t work. Google would see it was an ad and discount it. So one increasingly popular approach is for you to pay me to include a link in my blog. I mean, right in it: not as a link, or a ‘sponsored by’, but as a sentence, embedded, as it were, inside my copy.

I had some trouble getting my head around this, so I’ll walk you through it. I add a sentence into my blog, and then turn one of the words in it into a link to the company’s website. For my trouble I get $150. The company, if it gets enough people like me to do this, will see its website rise up through the Google ranks.

This is what the Internet, and blogs, have become. A somewhat seedy enterprise where companies–and we’re talking reputable companies here–hire ad companies to hunt out people like me with blogs that are sufficiently popular, and vaguely related to their line of business, to insert a sentence and a link.

If you’re not sure what’s wrong with this, I’ll tell you.

First off, it’s dodgy. If Google finds out about it it will not only discount the link in its calculations, but ban the website–my blog, in other words–from its index. Google doesn’t like any kind of mischief like this because it corrupts their search.

That’s why a) the blog needs to look vaguely related and b) it can’t just be any old sentence that includes the link. Google’s computers are sharp enough to spot nonsense.

That’s why kosher links are so valuable, and why there’s business in trying to persuade bloggers like me to break Google’s rules. If I get banned, my dreams of a profitable web business are gone. For the company and ad firm: nothing.

Second, it’s dodgy. It works on the assumption that all blog content is basically hack work and the people who write it are for sale. I think that’s why I loathe it so much. It clearly works: When I got back to one company that approached me, I was told the client’s request book had already been filled.

With every mercenary link sold they devalue the web. The only thing that might make my content valuable is that it’s authentic. It’s me. If I say I like something, I’m answerable for that. Not that people drop by to berate me much, but the principle is exactly the same as a journalistic one: Your byline is your bond, not a checkbook.

Gay Lesbian Syrian Blogger? Or a Bearded American from Edinburgh?

By | November 22, 2011

Here’s a cautionary tale about how hard it is to verify whether someone is who they say they are:

Syrian lesbian blogger is revealed conclusively to be a married man

Tom MacMaster’s wife has confirmed in an email to the Guardian that he is the real identity behind the Gay Girl in Damascus blog


Syrian lesbian blogger has been revealed to be Tom MacMaster, an American based in Scotland. Public domain

The mysterious identity of a young Arab lesbian blogger who was apparently kidnapped last week in Syria has been revealed conclusively to be a hoax. The blogs were written not by a gay girl in Damascus, but by a middle-aged American man based in Scotland.

The Guardian, frankly, has not covered itself in glory on this issue. The story itself makes no mention of the fact that the paper was duped. It was, after all, bloggers who did the detective work that uncovered the hoax, not the paper. There’s this mea culpa, buried deep in a secondary story, but it doesn’t apologise for misleading readers for more than a month:

The Guardian did not remove all the pictures until 6pm on Wednesday 8 June, 27 hours after Jelena Lecic first called the Guardian. It took too long for this to happen, for which we should apologise (see today’s Corrections and clarifications). The mitigating factors are that we first acted within four hours but compounded the error by putting up another wrong picture, albeit one that had been up on our website for a month, was unchallenged and was thought to have come directly from “Amina”. We know for a fact that the two pictures are of Jelena Lecic, but we didn’t know much else until this evening. But we do know that when using social media – as we will continue to do as part of our journalism – the Guardian will have to redouble its efforts in establishing not just methods of verification, but of signalling to the reader the level of verification we think we can reasonably claim.

And even then The Guardian hasn’t corrected itself everywhere: This piece is still up, uncorrected, and illustrating some other journalistic lapses: the story is unsourced, and nothing in it is flagged as unconfirmed:


The only suggestion that something is amiss is this at the end:

• This article was amended on 7 June 2011 and again on 8 June 2011 after complaints that photographs accompanying articles relating to Amina Araf showed someone other than the abducted blogger. The photographs have been removed pending investigation into the origins of the photographs and other matters relating to the blog.

Bottom line: Journalists have got to be smarter. Smarter about the old things, such as dual sourcing, being sceptical about everything (a lesbian blogger in Damascus posting pictures of herself and using her real name? Even the author of the Guardian pieces was using a pseudonym–itself a no-no) and doing some basic legwork to try to authenticate the person. And smarter about the new stuff: using the same tools the bloggers themselves used in exploring the real person behind the blog (those people could be forgiven for not having done this earlier: they, after all, are a community, and accepted ‘her’ as one would in such a community).

So what are those ‘new’ tools?

  • basic search. Do we know everything about this person? What kind of online footprint did they have before this all happened?
  • check photos’ origin. Not always easy, but worth doing. File names. Captions. Check out whether there’s any data hidden in the image. Image date.
  • IP addresses of emails and other communications.
  • Website/blog registration. Where? By whom?
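The third item, checking the IP addresses behind emails, can be partly automated with nothing beyond the standard library. A minimal sketch–the raw message below is invented; in practice you would view the full source of the email, and real messages usually carry several Received headers, with the bottom-most being the one closest to the sender:

```python
import email
import re

# An invented raw message standing in for the full source of a real email.
RAW = """Received: from mail.example.net (mail.example.net [203.0.113.7])
    by mx.example.com; Wed, 8 Jun 2011 10:00:00 +0000
From: amina@example.org
Subject: hello

body
"""

def received_ips(raw_message):
    """Pull IPv4 addresses out of the Received headers, in header order."""
    msg = email.message_from_string(raw_message)
    ips = []
    for header in msg.get_all("Received", []):
        ips.extend(re.findall(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]", header))
    return ips

print(received_ips(RAW))
```

An IP address is only a starting point–it places a machine, not a person, and headers can be forged–but it was exactly this kind of legwork that helped unravel the Amina hoax.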

These new tools need to be learned by journalists. And we need to learn them quickly.

We also need to find better ways to correct things when we get them wrong, and, frankly, to say sorry. Here are some other outlets that fell for it and have yet, at the time of writing, to either apologise or correct their stories:

WaPo: Elizabeth Flock, “‘Gay girl in Damascus’ Syrian blogger allegedly kidnapped,” June 7, 2011

CNN: “Will gays be ‘sacrificial lambs’ in Arab Spring?”

AP: Syrian-American gay blogger missing in Damascus – Timesonline.com- World-

NYT (since corrected, sort of, but the comments are intriguing. Readers are gullible too, although they might reasonably feel aggrieved that the NYT didn’t do its job in checking the facts): After Report of Disappearance, Questions About Syrian-American Blogger – NYTimes.com

More links:

Open door- The authentication of anonymous bloggers – Comment is free – The Guardian

Gay Girl in Damascus blog extracts- am I crazy- Maybe – World news – The Guardian

Syrian blogger Amina Abdallah kidnapped by armed men (example of The Guardian duped)

Wikipedia: Amina Abdallah Araf al Omari – Wikipedia, the free encyclopedia