Tag Archives: New York Times

Media: Reducing Story Production Waste

In trying to change news to match the new realities of the Interwebs, media professionals are still somewhat stuck in old ways of doing things. One of those is failing to address the massive waste in news production – or at least parts of it.

So what potential waste is there? Well, these are the obvious ones:

  • Gathering: Reporters/trips/stories per trip/matching other outlets
  • Editing: The number of people who look at a story before it is published/time a story takes to work through the system

I’m more interested, however, in the amount of waste from material generated. Think of it like this:

What goes in:

  • Story idea
  • Logistics (travel/communications/reporting tools)
  • Interviews, multimedia and other material generated

What comes out:

  • Story
  • Photo
  • Video (sometimes)

What goes to waste:

  • All content not used in the story (some may be reused – photos, sidebars – but rarely)
  • All content used that’s not reused or repurposed.

This seems to me to be extremely wasteful in an industry in so much pain. Any other industry wouldn’t just look to pare back on factors of production but to also minimize the waste generated.

Any journalist will know just how much we’re talking about. Say you interview five people for a story. Even a stock market report is going to involve five interviews of at least five minutes each. At about 150 words a minute that’s nearly 4,000 words. The stock market report itself is going to be about 500 words, maybe 600. Allow for the reporter’s questions and some backchat and you still have, say, 3,000 words of usable material, of which only 500 make it into print. For every 500 words produced we had to throw out 2,500.
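The back-of-the-envelope arithmetic can be made explicit. A minimal sketch in Python, using the rough figures assumed above (five five-minute interviews, 150 words a minute, a 500-word story); the one-fifth discount for the reporter’s questions and backchat is my own guess, not a measured figure:

```python
# Rough story-production-waste (SPW) arithmetic, using the figures above.
INTERVIEWS = 5          # people interviewed for the story
MINUTES_EACH = 5        # minimum length of each interview, in minutes
WORDS_PER_MINUTE = 150  # rough rate of conversational speech
STORY_WORDS = 500       # length of the published stock market report

spoken = INTERVIEWS * MINUTES_EACH * WORDS_PER_MINUTE  # 3,750 words
usable = spoken * 4 // 5  # discount reporter's questions and backchat
wasted = usable - STORY_WORDS

print(f"Words spoken:    {spoken}")
print(f"Usable material: {usable}")
print(f"Thrown away:     {wasted} for every {STORY_WORDS} published")
```

However you tweak the assumptions, the ratio of discarded to published words stays well above four to one.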

Yes, I know it’s not a very scientific way of doing things, but you get my point. Most journalists only write down the quotes they need for the story, and many will delete the notes they’ve taken if they’re typing them on the screen in the same document they’re writing the story on. So all that material is wasted.

A good reporter will keep the good stuff, even if it’s not used in the story, and will be able to find it again. But I don’t know of any editorial system that helps them do that – say, by tagging or indexing the material – let alone one that makes it available to other reporters on the same beat.

This is where I think media needs to change most. It needs to assume that all material gathered by journalists – through interviews, research, even browsing – is potentially content. It needs to help journalists organise this material for research but, more importantly, to generate new content from it.

Take this little nugget, for example, in a New York Times story, Nokia Unveils a New Smartphone, but Not a Product of Its Microsoft Deal: The reporter writes of the interviewee, Nokia’s new chief executive Stephen Elop: “During the interview, he used the words ‘innovate’ or ‘innovation’ 24 times.”

I really like that. It really captures something that quotes alone don’t. We would call it “interview metadata”–information about the interview that is not actual quotes or color but significant, nonetheless.

Whether the journalist decided to count them early on during the interview, or took such good notes that a keyword search or manual count afterwards was enough, or whether he transcribed the whole thing in his hotel room later, I don’t know. (A quibble: I would have put the length of the interview in that sentence, rather than an earlier one, because it lends the data some context. Or one could include the total number of words in the interview, or compare the count with that of another word, such as “tradition”. Better still, create a word cloud out of the whole interview.) (Update: here’s another good NYT use of metadata, this time the frequency of words in graduation speeches: Words Used in 40 Commencement Speeches – Class of 2011.)
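Counting that kind of interview metadata is trivial once the interview exists as text, which is one argument for transcribing everything. A minimal sketch (the transcript snippet here is invented, not from the actual interview):

```python
from collections import Counter
import re

transcript = """We will innovate faster. Innovation is in Nokia's DNA,
and we intend to innovate at every level of the business."""

# Normalise to lowercase word tokens, then tally them.
words = re.findall(r"[a-z']+", transcript.lower())
counts = Counter(words)

# Matching on the stem "innovat" catches "innovate", "innovation", etc.
innovate_hits = sum(n for w, n in counts.items() if w.startswith("innovat"))
print(innovate_hits)  # 3 in this snippet
```

The same `Counter` tally is also exactly the raw input a word-cloud tool needs.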

The point? Elop is an executive, and he has a message. He wants to convey the message, and so he is using carefully chosen words to not only ensure they’re in any quote that’s used, but also to subliminally convey to the journalist the angle he hopes the journalist will adopt. By taking the interview metadata and presenting it separately, that objective, and strategy, will be well illustrated to the reader.

And, of course, you’ve reduced the story production wastage, or SPW, significantly.

Media can help this process by developing tools and offering services to maximise the usefulness of material gathered during research and interviews, and to reduce the time a journalist spends on marshalling this material.


  • Transcription services, where journalists can send a recording and get the material back within the hour (or even as the interview is conducted, if the technology is available).
  • Push some of the content production to the journalist: let them experiment with wordclouds and other data visualization tools, not only to create end product but to explore the metadata of what they’ve produced.
  • Explore and provide content research and gathering tools (such as Evernote) to journalists so they don’t have to mess around too much to create stuff drawing on existing material they’ve gathered for the story they’re working on, from previous research and interviews and, hopefully, from that of colleagues.
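None of this needs heavy technology. As a sketch of the tagging-and-indexing idea floated above – the notes, tags and function names here are invented for illustration – an editorial system could start with little more than this:

```python
from collections import defaultdict

# Map each beat tag to the notes filed under it.
index = defaultdict(list)

def file_note(text, tags):
    """Store an unused interview snippet under one or more beat tags."""
    for tag in tags:
        index[tag.lower()].append(text)

def find(tag):
    """Retrieve everything filed under a tag, by anyone on the beat."""
    return index[tag.lower()]

file_note("Elop: 'innovation is in our DNA' (unused quote)",
          ["Nokia", "strategy"])
file_note("Analyst: Symbian handset margins falling",
          ["Nokia", "earnings"])

print(find("nokia"))  # both notes come back
```

A real system would persist this and do full-text search, but the point stands: the unused 2,500 words per story become a queryable asset instead of a deleted file.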

A lot of my time training journalists these days is spent on these kinds of tools, and I’m always surprised at how little use is made of them. That needs to change if media is to find a way to make more use of the data it gathers in the process of creating stories.

A pale white man shows us what journalism is

My weekly Loose Wire Service column.

Is the Internet replacing journalism?

It’s a question that popped up as I gazed at the blurred, distorted web-stream of a press conference from London by the founder of WikiLeaks, a website designed to “protect whistleblowers, journalists and activists who have sensitive materials to communicate to the public”.

On the podium there’s Julian Assange. You can’t make a guy like this up. White-haired, articulate and defensive, aloof and grungy, specific and then sweepingly angry. Fascinating. In a world of people obsessed by the shininess of their iPhones, Assange is either a throwback to the past or a breath of fresh air.

WikiLeaks has been around for a few years but has, with the release of mounds of classified data about the Afghan War, come center stage.

Assange doesn’t mince his words. He shrugs off questions he doesn’t like by pointing his face elsewhere and saying “I don’t find that question interesting.” He berates journalists for not doing their job – never something to endear an interviewee to the writer.

But in some ways he’s right. We haven’t been doing our job. We’ve not chased down enough stories, put enough bad guys behind bars (celebrities don’t really count). His broadsides may be more blunderbuss than surgical strike, but he does have a point. Journalism is a funny game. And it’s changing.

Asked why he chose to work with three major news outlets to release the Afghan data, he said it was the only way to get heard. He pointed out that he’d put out masses of interesting leaks on spending on the Afghan war previously and hardly a single journalist had picked it up.

Hence the — inspired — notion of creating a bit of noise around the material this time around. After all, any journalist can tell you the value of the material is less intrinsic than extrinsic: Who else is looking for it, who else has got it, and if so can we publish it before them.

Sad but true. We media tend to only value something if a competitor does. A bit like kids in the schoolyard. By giving it to three major outlets — New York Times, The Guardian, Der Spiegel — Assange ensured there was not only a triple splash but also the matchers from their competitors.

So Assange is right. But that’s always been like that. Assange is part of — and has identified — a much deeper trend that may be more significant than all the hand-wringing about the future of the media.

You see, we’ve been looking at media at something that just needs a leg-up. We readily admit the business model of the media is imploding.

But very little discussion of journalism centers on whether journalism itself might be broken. Assange — and others – believe it is.

The argument goes like this.

The model whereby media made a lot of money as monopolistic enterprises — fleecing advertisers at one end, asking subscribers to pay out at the other, keeping a death grip on the spigot of public, official or company information in the middle — has gone. We know that.

But what we don’t perhaps realize is that the Internet itself has changed the way that information moves around. I’m not just talking about one person saying something on Twitter, and everyone else online reporting it.

I’m talking about what news is. We journalists define news in an odd way — as I said above, we attach value to it based on how others value it, meaning that we tend to see news as a kind of product to grab.

The Internet has changed that. It’s turned news into some more amorphous, that can be assembled from many parts.

Assange and his colleagues at WikiLeaks don’t just act as a clearing house for leaked data. They add extraordinary value to it.

Don’t believe me? Read a piece in The New Yorker in June, about the months spent on cracking the code on, and then editing video shot in Iraq.

In a more modest way this is being done every day by bloggers and folk online, who build news out of small parts they piece together —some data here, a report there, a graphic to make sense of it. None of these separate parts might be considered news, but they come together to make it so.

Assange calls WikiLeaks a stateless news organization. Dave Winer, an Internet guru, points out that this pretty much is what the blogosphere is as well. And he’s right. WikiLeaks works based on donations and collaborative effort. Crowd-sourcing, if you will.

I agree with all this, and I think it’s great. This is happening in lots of interesting places — such as Indonesia, where social media has mobilized public opinion in ways that traditional media has failed.

But what of journalism, then?

Jeff Jarvis, a future-of-media pundit, asked the editor of The Guardian, one of the three papers that WikiLeak gave the data too first, whether The Guardian should have been doing the digging.

He said no; his reporters add value by analyzing it. “I think the Afghan leaks make the case for journalism,” Alan Rusbridger told Jarvis. “We had the people and expertise to make sense of it.”

That’s true. As far as it goes. I tell my students, editors, colleagues, anyone who will listen, that our future lies not so much in reporting first but adding sense first. And no question, The Guardian has done some great stuff with the data. But this is a sad admission of failure — of The Guardian, of reporting, of our profession.

We should be looking at WikiLeaks and learning whatever lessons we can from it. WikiLeaks’ genius is manifold: It has somehow found a way to persuade people, at great risk to themselves, to send it reams of secrets. The WikiLeaks people do this by taking that data seriously, but they also maintain a healthy paranoia about everyone — including themselves — which ensures that sources are protected.

Then they work on adding value to that data. Rusbridger’s comments are, frankly, patronizing about WikiLeaks’ role in this and previous episodes.

We journalists need to go back to our drawing boards and think hard about how WikiLeaks and the Warholesque Assange have managed to not only shake up governments, but our industry, by leveraging the disparate and motivated forces of the Internet.

We could start by redefining the base currency of our profession — what news, what a scoop, what an exclusive is. Maybe it’s the small pieces around us, joined together.

Measured vs Spewed: The New Reviewers

(A podcast of this can be downloaded here.)

The walls of elite reviewers come tumbling down, and it’s not pretty. But is it what we want?

I belatedly stumbled upon this piece in The Observer by Rachel Cooke on a new spat between editors, reviewers and blogger reviewers, and not much of it is new. There’s the usual stuff about how bloggers are anonymous (or at least pseudonymous) and the usual tale of how one writer got her spouse to write an anonymous positive review on Amazon (why hasn’t mine done one yet!) to balance against all the negative stuff.

As Tony Hung points out, the piece gets rather elitist by the end, although I have to like her description of Nick Hornby, a great writer and careful reviewer: “[H]is words are measured, rather than spewed, out; because he is a good critic, and an experienced one; and because he can write.” Measured vs spewed is a good way of putting it. It’s also a good way of thinking about the two very different beasts we’re talking about here.

There are two different kinds of reviews, serving two different purposes. The point here is that there are two different kinds of purposes here. If Nick Hornby likes a book, I may well buy it because I like Nick Hornby’s work. Of course, I’ll also enjoy his review as a piece of writing in its own right; chances are he’s put a huge amount of effort into it. It’s all about who writes the review. (And we need to always keep in the back of our mind the tendency, noted down the years in Private Eye, that reviewers in big name newspapers often seem to end up reviewing books by people they know, often rather well. It’s a small world, the literary one.)

If I’m reading about a book on Amazon I’m less picky about who and more about how many, and what. If 233 out of 300 people like a book on Amazon I am going to be more impressed than if 233 out of 300 people hated it. I’ll scan the reviews to see whether there are any common themes among the readers’ bouquets or brickbats. Take Bill Bryon’s latest, for example: Most reviewers loved it, and quite a few fell out of their chair reading it. Take Graeme Hunter, who writes: “Bill has managed yet another work of ‘laugh-out-loud’ ramblings, but this is his first to make me cry at the end.” That tells me that regular readers of Bryson are probably going to like it. But not everyone. One reviewer, J. Lancaster, wrote that while he was a big fan, he found the book “slow and ponderous and lacks the wit, insight and observation of, well, all his other books.” That tells me something too: Don’t expect to be dazzled all the way through.

Now note that these reviewers have attached their real names. They’re not anonymous, pseudonymous or fabrications of someone’s imagination or close family. Their writings may not be that literary, but that’s not what I’m looking for in an Amazon review. With Amazon, I’m looking to mine the wisdom of the crowd — the aggregate opinion of a group of people all with the same interest as myself in mind: not wasting our money on a dud book.

Compare what they write to the two snippets of blurb from big name publications on the same Amazon page:

New York Times
‘Outlandishly and improbably entertaining…inevitably [I] would
be reduced to body-racking, tear-inducing, de-couching laughter.’

Literary Review
‘Always witty and sometimes hilarious…wonderfully funny and…’

Useful, but not much more useful than the Amazon reviews.

The bottom line is that reading a review on Amazon is like polling a cross section of other people who’ve read the same book. It’s like being able to walk around a bookshop tapping strangers on the shoulder and asking what they think of the book you have in your hand. Their responses are likely to be as spewed as an Amazon or blog review. But it doesn’t lessen their value. If all you want to know is whether the book is worth reading, you may be better served than some ‘measured’, self-conscious professional review.

This is the difference that the Internet brings us. It’s not either/or, it’s about consumers having more information about what they’re buying, and having a chance to give feedback on what they have bought. That all this is a little unnerving to those writers used to being far removed from the book-buying mob, and the pally/bitchy relationship they have with reviewers should come as no surprise. My advice: get used to it.

PS I spewed this piece out in 27 minutes. (You can tell – Ed)

Internet Voting: A Minority Report?

A reader kindly pointed out this New York Times piece on the Internet voting story I posted yesterday, which highlights some other aspects of the case.

While four members of a panel asked to review the SERVE program — designed to allow Americans overseas to vote over the Net — said it was insecure and should be abandoned, the NYT quoted Accenture, the main contractor, as saying the researchers drew unwarranted conclusions about future plans for the voting project. “We are doing a small, controlled experiment,” Meg McLauglin, president of Accenture eDemocracy Services, was quoted as saying.

Another side to this pointed out by the loose wire reader: Accenture says that the four researchers were a minority voice, and that five of the six others ‘would not recommend shutting down the program’. One of the other outside reviewers, Ted Selker, a professor at the Massachusetts Institute of Technology, disagreed with the report, and was quoted by the NYT as saying it reflected the professional paranoia of security researchers. “That’s their job,” he said. In response one of the four naysayers noted that they were the only members of the group who attended both of the three-day briefings about the system.

The reader also makes this observation: “One of their complaints is that the Internet is inherently unsafe, which may be true. I don’t believe that the US Postal Service (which is the current method for transmitting absentee ballots) is inherently safe either. Ever seen a bag of mail sitting in a building lobby waiting for pickup? I have.” Fair enough, but unless the bag contained ballots (something I have seen in, er, less security conscious democracies), I don’t think it’s a fair comparison, since a few tampered or misdirected ballots would not undermine the integrity of the election.

The security compromises in SERVE are likely to be at the server level, where hackers could either alter delivered votes, mimic voter activity, or disrupt legitimate voters from placing their ballot. This could be done on a scale that would undermine the integrity, or at least could be believed to do so. Remember: In an electronic election (where no parallel paper ballot is collected), a claim of largescale tampering is enough to undermine confidence in the result.

My tupennies’ worth? Although the E stands for experiment, I don’t see SERVE as a ‘controlled experiment’. The NYT says the program will be introduced “in the next few weeks” and covers seven states, and a possible 100,000 people this year. That doesn’t sound like an experiment to me. Maybe I’m missing something here, but I don’t really see how you can conduct an experiment in a live voting environment. What happens if there’s a suggestion the system has been compromised, either during or after the vote? I always thought that voting systems were either approved, credible and acceptable or not in public use. Of course it’s fine to have an ‘experiment’ where the only experimental part is, say, the user-aspects of the voting process. But security can surely never be part of an experiment in a live voting situation.

Security experts are paid to be skeptical. If they raise a warning flag as big as this, I think they should be listened to.