Tag Archives: the New York Times

Libya’s Stuxnet?

A group of security professionals who have good credentials and strong links to the U.S. government have outlined a Stuxnet-type attack on Libyan infrastructure, according to a document released this week. But is the group outlining risks to regional stability, or is it advocating a cyber attack on Muammar Gadhafi?

The document, Project Cyber Dawn (PDF), was released on May 28, 2011 by CSFI, the Cyber Security Forum Initiative, which describes itself as a

non-profit organization headquartered in Omaha, NE and in Washington DC with a mission “to provide Cyber Warfare awareness, guidance, and security solutions through collaboration, education, volunteer work, and training to assist the US Government, US Military, Commercial Interests, and International Partners.”

CSFI now counts about 7,500 members and runs an active LinkedIn forum.

To be clear, the document does not advocate anything. It merely highlights vulnerabilities and details scenarios. It concludes, for example:

CSFI recommends the United States of America, its allies and international partners take the necessary steps toward helping normalizing Libya’s cyber domain as a way to minimize possible future social and economic disruptions taking place through the Internet.

But before that it does say:

A cyber-attack would be among the easiest and most direct means to initially inject into the systems if unable to gain physical engineering attacks against the facility. Numerous client-side attack vectors exist that support payloads capable of compromising SCADA application platforms.

Elsewhere it says:

The area most vulnerable to a cyber-attack, which could impact not only the Libyan’s prime source of income, but also the primary source of energy to the country, would be a focused attack on their petroleum refining facilities. Without refined products, it is difficult to fuel the trucks, tanks and planes needed to wage any effective war campaign.

The document itself is definitely worth a read; it doesn’t just focus on the cyberweapon side of things. Complicating matters is that one of the contributors to the report, a company called Unveillance, was hacked by a group called LulzSec around the time the report was being finished. It’s not clear whether this affected the release of the report.

Emails stolen from Unveillance and posted online by LulzSec indicate that two versions of the report were planned: one public one, linked to above, and one that would “go to staffers in the White House.” In another email a correspondent mentions an imminent briefing for Department of Defense officials on the report.

The only difference between the two reports that I can find is that the names of some SCADA equipment in Libya have been blacked out in the public version. The reports were being finalized when the hack took place, apparently in the second half of May.

Other commentators have suggested that we seem to have a group of security researchers and companies linked to the U.S. government apparently advocating what the U.S. government itself, in its International Strategy for Cyberspace report released May 17, would define as an act of cyberwar.

I guess I’m surprised by something else: that we have come, within a few short months, from thinking of Stuxnet as an outlier, a sobering and somewhat shocking wake-up call to the power of the Internet as a vector for taking out supposedly resilient and well-defended machinery, to having a public document airily discussing exactly the same thing, only this time against non-nuclear infrastructure.

(The irony probably won’t escape some people that, according to a report in the New York Times in January, it was surrendered Libyan equipment that was used to test the effectiveness of Stuxnet before it was launched. I’m yet to be convinced that that was true, but it seems to be conventional wisdom these days.)

Frankly, I think we have to be really careful about how we discuss these kinds of things. Yes, everything is at arm’s length: just because bodies such as CSFI may have photos of generals on their web pages, and their members talk about their reports going to the White House, doesn’t mean that their advice is snapped up.

But we’re at an odd point in the evolution of cyberwar, and I don’t think we have really come to terms with what we can do, what others can do, and the ramifications of both. Advocating taking out Libyan infrastructure with a Stuxnet 2.0 may sound good, but it’s a road we need to think carefully about.

Newspapers’ Challenge

Newspapers have been scrambling to keep up with the world of blogs. In the process they’re actually destroying what sets them apart.

Take this piece from the International Herald Tribune. It’s in this morning’s revamped paper, under the byline of John Doyle, without further affiliation. It’s a good piece, except for a lame ending, but it contains at least four grammatical or spelling errors:

  • “the Scotland” twice (“Darren Fletcher was the Scotland’s best player”)
  • “England, under am Italian manager”
  • “There is a poetry of national longing and a poetic justice being behind the success of the Celtic countries.” Good luck making sense of that.

At first I put this down to poor subbing. But that isn’t the problem.

The problem is that this piece is actually a blog post, written by someone who, as far as I can work out, doesn’t work for the NYT/IHT. At the bottom of the online version is this:

John Doyle is the TV Columnist for the Globe and Mail in Canada, writes regularly about soccer and his book about soccer, All The Rambling Boys of Pleasure, will be published in 2010.

So, the first problem is: does a blog post count as a news article that can be published in a paper as such? And should the reader not be informed that

  • it’s a blog post, not a news piece (or analysis)
  • and that the author isn’t actually a NYT scribe?

The editing is not good, but that would actually be OK if this were a blog post, because a blog post can be updated. Indeed, the online version has been: it’s longer, it makes sense, and the grammatical and spelling errors have gone. There’s even a correction there that signifies the evolving nature of online writing.

My point is this: I paid for this newspaper. I thought I was paying for something that reflected the best of the IHT/NYT’s stable of writers. I didn’t expect to see the space filled with half-finished blog posts by people who may or may not actually be on the payroll. And I certainly didn’t expect to see the stuff pasted in without any further editing on the part of the IHT staff.

Don’t get me wrong. I still love the paper. And cuts mean that subs don’t have half the time they used to for editing this stuff.

But if newspapers are going to stand any chance at all, they really need to make sure that their material is so, so much better polished than its online counterparts; otherwise we readers will start to wonder why we’re paying for stuff offline that’s worse than the stuff we read online.

Is New Media Ready for Old Media?


I’m very excited by the fact that newspapers are beginning to carry content from the top five or so Web 2.0/tech sites. These blogs (the word no longer seems apt for what they do; Vindu Goel calls them ‘news sources’) have really evolved in the past three years, and the quality of their coverage, particularly that of ReadWriteWeb, has grown by leaps and bounds. Now it’s being carried by the New York Times.

A couple of nagging questions remain, however.

1) Is this old media eating new media, or new media eating the old? On the surface this is a big coup for folk like ReadWriteWeb (which didn’t really exist three years ago), but look more closely and I suspect we may come to see this kind of thing as the beginning of old media’s acknowledgement that they have ceded some important ground they used to dominate. This, in short, marks traditional media’s recognition that these news sources are, to all intents and purposes, news agencies that operate on a par with, and have the same values as, their own institutions.

2) Is new media ready for old media? I have a lot of respect for ReadWriteWeb, and for most of the other tech sites included in this new direction. But they all need to recognise that by partnering with old media they must follow the same rules. There’s no room for conflicts of interest here: even the NYT has reported on potential conflicts of interest for Om Malik and Michael Arrington (here’s a great piece from The Inquisitr about the issue, via Steve Rubel’s shared Google Reader feed).

The thing with conflicts of interest is that they’re tough. It’s hard to escape them. And it’s not enough to disclose them. You have, as a writer (let’s not say journalist here, it’s too loaded a word, like blogger), a duty to avoid conflicts of interest. Your commitment as a writer has to be to your reader. If your reader doesn’t believe that you’re writing free of prejudice or favor, then you’re a hack. And I don’t mean that in a nice way.

Which means you have to avoid not only actual conflicts of interest but the appearance of them. Your duty is not just to disclose conflicts of interest, and potential conflicts of interest, but to avoid them. If that means making less money, then tough.

So, for these ‘news sources’, the issue is going to become a more central one. Of course, the question will grow larger as these outfits move mainstream. But it may become more pressing for the carrier of the news than for the provider: who, say, accepts responsibility for errors and conflicts of interest? The NYT and The Washington Post, or the providers of the news? I’m sure there will be lots of caveats in the small print, but if material is on the NYT website, I think a reader would assume it reflects that paper’s ethical standards. If you’re in doubt, think of the recent United Airlines case.

That story’s reappearance started on Google News; it was picked up by Income Securities Advisors, a financial information company, and then distributed via Bloomberg. The technical error was Google’s, in finding the old story on a newspaper website and miscategorising it as new, but the human error was in the ‘news source’, which saw it and then fired it off to its service. Who is to blame for that mess? Well, the focus is all on Google, but to me the human element is the problem here, namely the reporter/writer who failed to double-check the source and date of the piece itself.

The bottom line? It’s great that old media are recognising the quality of new media. What I want to see is this rising tide lifting all boats. Old media need to do more than grab at these news sources out of desperation; they should learn from their ingenuity, easy writing style and quality. And these outfits, or at least some of them, need to take a cue from old media, take a long, hard look at themselves, and ask whether they could serve their readers better by shedding all conflicts of interest, real, potential or perceived.

The Size of the Future

(This is a guest post from a friend and long-time colleague, Robin Lubbock of WBUR, who will be contributing to Loose Wire Blog. You can read his blog, the Future of New(s), here.)

Why don’t you buy hardback books? Either they are too expensive, or too big. They are too big to hold comfortably in one hand. So if you’re sitting in bed trying to read, you’ve got to find a way to prop the thing up. Not a hurdle you can’t overcome. But an inconvenience.

Now think about the reader of the future. The issues are the same: size, readability, and cost. Take any lessons you’ve learned from book reading, apply them to the electronic book, and you’ll be imagining the electronic reader of the future.

So why hasn’t anyone made a good electronic book yet?

I was in Staples the other day and an assistant asked me what I wanted. I said “I want something about three or four times the size of an iPhone which I can use for browsing the Web when I’m in bed.” He said they had nothing like that, but he wanted one too.

So when I saw photos of a group of proposed readers in an article by John Markoff in the New York Times this weekend I thought my dream had come true.

But Markoff has a different view. He says he also used to think he was looking for a mid-sized reader for the Web, and he goes over some of the issues. But he reaches the conclusion that although chip power means you can’t yet get book performance out of a phone-sized reader, people could be comfortable reading newspapers on a three-and-a-half-inch screen.

I took his implication to be that if people are happy with a small screen for reading newspapers and blogs, there will be no call for a mid-sized reader.

But I still want one. And I still believe the company that successfully develops a tool that has the same benefits as a novel, in usability, portability and ruggedness, will make a fortune.

Beyond Information Delivery

Newspaper delivery guy, Jakarta 2007

Over at Loose Wire sister site ten minutes I just wrote a review of ShifD, a new Web 2.0 clippings service that works, in theory, between desktop and mobile. More interesting, I reckoned (quoting myself; sorry), is that

it’s developed by two guys from within The New York Times’ R&D Lab, so you can’t help wondering where something like this might fit into the world of newspapers.

I’d love to see, for example, a five-digit code at the end of each news story in my newspaper/magazine that I could key into my phone, and which would then store a copy of that story on my desktop. It would save me carrying a magic marker around and then forgetting to clip the story when I got home. Forget reading the NYT on my handheld: that ain’t going to happen for an old fogey like me; but I’d love a way to store what I liked somewhere useful so that I wouldn’t forget it.
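To make the idea concrete, here’s a minimal sketch, in Python, of how such a clip-code lookup might work. Everything in it, the codes, the URLs, the function name, is hypothetical; no such NYT service exists.

    # Toy sketch of the clip-code idea: the paper prints a five-digit code
    # beside each story; keying it in looks up the story's URL and saves
    # it to a personal reading list. Codes and URLs are made up.

    ARTICLE_CODES = {
        "48213": "https://www.nytimes.com/example-story-one.html",
        "48214": "https://www.iht.com/example-story-two.html",
    }

    def clip(code, reading_list):
        """Look up a printed five-digit code and store the story's URL."""
        url = ARTICLE_CODES.get(code)
        if url is None:
            raise KeyError("unknown clip code: " + code)
        if url not in reading_list:
            reading_list.append(url)
        return url

    my_clippings = []
    clip("48213", my_clippings)
    print(my_clippings)  # ['https://www.nytimes.com/example-story-one.html']

The publisher’s end is just a table mapping printed codes to URLs; the phone’s end is a lookup and a save.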

Maybe this is how newspapers need to think of themselves. The medium is not really the problem: I want my newspaper in traditional form, because it’s tried and tested and works for me. But that doesn’t mean I don’t want it in other forms too: for when I’m crushed on a subway, where flipping back the pages of the IHT might not be welcomed by my fellow sardines, or when I’m stuck without reading matter waiting for a friend (hi, Mark!).

And of course other people have their own requirements too. The medium is always going to differ from person to person. So it’s the content that is the constant, the one element you want to ensure your readers/users are able to access whenever and wherever they want.

And that doesn’t mean just reading it once. Nowadays, as information bombards us, we are more selective about what we read. Two points here: We get a lot of stuff thrown at us, so our ability to recall stuff is weaker. And, because our time is precious, when we do allocate it to something, we don’t want to feel that time is wasted or lost.

Ergo, the value comes in helping us users store the information we’ve already decided to commit some of our scarce resources to, so we can maximise our benefit from it. Whatever the article or piece of information, chances are that if we bothered to read it, or most of it, we’ll hope to retain some of it for future use.

That, I reckon, is where something like ShifD comes into its own. But not if it’s a standalone service: then it will merely fight with all the other services out there that offer something similar. Its power will come if it can be harnessed to the NYT, so that however, whenever and wherever I dedicate some of my time to reading that august rag, I can be sure of a simple way, via my phone or desktop, of storing anything I read that I consider valuable and worth keeping.

In this sense, if you want to get all grand about it, the future of media lies not so much in the format and medium of delivery to the consumer as in the format and medium of retention by the consumer. I as a consumer want the media provider to give me a way to maximise my utility from reading it, by recognising that reading something is not the end of the relationship with that article.

For me, the consumer, it’s the beginning: I’m hoping the piece will change my life sufficiently, from advice on buying new shoes to understanding the threat to my future from a Second Cold War. That, I suspect, is the challenge of today’s media.

Google Earth as Harbinger of Doom

Researchers are using Google Earth, the New York Times/IHT reports, to look for evidence of giant tsunamis, signs that the Earth has been hit by comets or asteroids more regularly, and more recently, than people thought:

This year the group started using Google Earth, a free source of satellite images, to search around the globe for chevrons, which they interpret as evidence of past giant tsunamis. Scores of such sites have turned up in Australia, Africa, Europe and the United States, including the Hudson River Valley and Long Island.

Chevrons are huge deposits of sediment that were once on the bottom of the ocean; they are as big as tower blocks and shaped like chevrons, the tip indicating the direction from which the tsunami came. 

I love the idea that academics use a tool like Google Earth to — possibly — puncture one of the greatest myths of the human era: that comets only come along once every 500,000 years.

Scientists in the working group say the evidence for such impacts during the last 10,000 years, known as the Holocene epoch, is strong enough to overturn current estimates of how often the Earth suffers a violent impact on the order of a 10-megaton explosion. Instead of once in 500,000 to one million years, as astronomers now calculate, catastrophic impacts could happen every few thousand years.

There are a couple of other quirks to this story. The working group of misfits is cross-disciplinary (there’s a specialist on the structural analysis of myth in there), but it only formed when its members bumped into each other at a conference. How much more efficient it would have been had they been blogging; they might have found each other earlier. (Perhaps they met before the blogging age; there’s a piece on the subject here from 2000.)

The second quirk for me is that the mythologist (actually Bruce Masse calls himself an environmental archaeologist) reckons he can pinpoint the exact date of the comet impact that created the Burckle Crater, between Madagascar and Australia, using local legends:

Masse analyzed 175 flood myths from around the world, and tried to relate them to known and accurately dated natural events like solar eclipses and volcanic eruptions. Among other evidence, he said, 14 flood myths specifically mention a full solar eclipse, which could have been the one that occurred in May 2807 B.C.

I love the idea of myths; I see them as a kind of early Internet, a way of dispersing knowledge using the most efficient tools available (in those days, stories and word of mouth). We tend to think of myths as superstition and scaremongering, but in many cases they are the few grains of wisdom that get passed on from generation to generation. They often get contorted in the telling, the original purpose (to warn) sometimes getting lost.

Like the Moken sea gypsies of the Andaman Sea, most of whom were spared the 2004 tsunami because they “knew from their tribal lore that this was a warning sign to flee to higher ground”, according to Reuters. On the Acehnese island of Simeulue, similar lore, dating back to the 1907 tsunami, tells islanders that “if the land is shaking and shoreline is drained abnormally, they have to go to very high land.” Only seven people out of 80,000 islanders died. 

Based on this, the idea of trying to pin down the comets, the craters and the chevrons by exploring local myth makes a lot of sense. I like that it is being done alongside something as modern, and as freely available, as Google Earth. I guess I’m just not happy about the implications for us current planet dwellers.

Source: Ancient crash, epic wave – Health & Science – International Herald Tribune (graphic here)

Domain Names as a Tool for Political Control?

A case that addresses all sorts of issues, and, at the same time, none of them. Reuters.com reported a few days ago that

The authorities in Kazakhstan, angered by a British comedian’s satirical portrayal of a boorish, sexist and racist Kazakh television reporter (Borat Sagdiyev ), have pulled the plug on his alter ego’s Web site. Sacha Baron Cohen plays Borat in his “Da Ali G Show” and last month he used the character’s Web site www.borat.kz to respond sarcastically to legal threats from the Central Asian state’s Foreign Ministry.

A government-appointed organization regulating Web sites that end in the .kz domain name for Kazakhstan confirmed on Tuesday it had suspended Cohen’s site. “We’ve done this so he can’t badmouth Kazakhstan under the .kz domain name,” Nurlan Isin, President of the Association of Kazakh IT Companies, told Reuters. “He can go and do whatever he wants at other domains.” Isin said the borat.kz Web site had broken new rules on all .kz sites maintaining two computer servers in Kazakhstan and had registered false names for its administrators.

Actually, Borat has been around for a while saying these things, and Kazakh officials have long been trying to put the record straight about their country; but it appears to be a U.S. series, a movie in the works and an appearance at the MTV Europe Music Awards that have been the catalyst for the Kazakhs to take action:

Cohen, as Borat, hosted the MTV Europe Music Awards in Lisbon last month and described shooting dogs for fun and said his wife could not leave Kazakhstan as she was a woman. Afterwards, Kazakhstan’s Foreign Ministry said it could not rule out that he was under “political orders” to denigrate Kazakhstan’s name and threatened to sue him.

Kazakhstan has also hired two PR firms and, according to the London Times, earlier this month published a four-page ad in the New York Times. Cohen must be lapping up the free publicity.

Reporters Without Borders is upset about this abuse of the country domain name, linking it to the alleged stage-managed closure of the opposition Kazakh website Navi.kz and calling it censorship, and beyond the competence of bodies that manage domain names:

In this way, it infringes the principles set out by ICANN, which requires that the management of the ccTLDs should be fair and non discriminatory.

Oddly, a piece in today’s IHT (which also, intriguingly, carries a four-page ad for Kazakhstan; the story originally appeared in Wednesday’s European edition) quotes the Kazakh foreign ministry spokesman, Yerzhan Ashikbayev, as denying that it was the government that blocked the site. Whoever made the decision, this isn’t exactly censorship: Borat just moves his website here, and loves the attention. That’s not to say there aren’t plenty of examples of government crackdowns on press freedom, including using the Kazakh network information centre (KazNIC) to harass the opposition website Navi into changing its domain name, twice; it can now be found at Mizinov.net. If Borat’s case does nothing else, it might raise public concern about political manipulation of those last two letters after the dot.

Enter Kinja, The New Blog Directory

Here’s another blog directory, going live today (it’s just a graphic at the time of writing this). Is it going to be different, or is it hype?

The New York Times today says Kinja, “automatically compiles digests of blogs covering subject areas like politics and baseball. Short excerpts from the blogs are included, with links to the complete entries on the individual blog sites.” Users can sign up for a free account, enter the addresses of their favorite blogs and generate a digest.

Those behind Kinja include Nick Denton, “whose small blog-publishing empire includes the New York gossip site Gawker” according to The Times, and Meg Hourihan, Kinja’s project director and a founder of the blog publishing service Blogger. (Her blog is here.)

Kinja users can make their customized digests public, the NYT says, and the best digests will be promoted on the site, making those users “part of the editorial team.”

There’s definitely room for improvement in the way blogs, and RSS feeds, are pulled together for the reader. Reading blogs, even in RSS form, can become quite a chore, and while there are some great blogs out there, the tendency of the most interesting ones to cover a very broad spectrum of topics makes sifting through them sometimes more time-consuming than one would like. Here’s an interesting discussion about what Kinja could be, and what people are looking for.

The Digital Fallout Of Journalistic Plagiarism and Fakery

How do you correct the Internet?

All these reports of plagiarism and fakery in U.S. journalism (at least 10 cases, according to the New York Times) raise a question I haven’t seen addressed elsewhere: what should newspapers and other publications that have carried the reports do about setting the record straight?

A USA Today report says of disgraced reporter Jack Kelley that it has “found strong evidence that Kelley fabricated substantial portions of at least eight major stories, lifted nearly two dozen quotes or other material from competing publications, lied in speeches he gave for the newspaper and conspired to mislead those investigating his work.”

Here’s a taster: “An extensive examination of about 100 of the 720 stories uncovered evidence that found Kelley’s journalistic sins were sweeping and substantial. The evidence strongly contradicted Kelley’s published accounts that he spent a night with Egyptian terrorists in 1997; met a vigilante Jewish settler named Avi Shapiro in 2001; watched a Pakistani student unfold a picture of the Sears Tower and say, ‘This one is mine,’ in 2001; visited a suspected terrorist crossing point on the Pakistan-Afghanistan border in 2002; interviewed the daughter of an Iraqi general in 2003; or went on a high-speed hunt for Osama bin Laden in 2003.”

That’s quite a lot of correcting to do. USA Today says it will withdraw all prize entries it made on Kelley’s behalf (including five Pulitzer nominations) and “will flag stories of concern in its online archive”.

But is that enough? Correcting the “online archive” would have to include all secondary databases such as Factiva (part-owned by Dow Jones, publisher of the Far Eastern Economic Review, The Wall Street Journal, and my employer; there are 1,495 USA Today stories with Jack Kelley’s name either on them or in them prior to this year). Strictly speaking, it should also include all copies of those stories elsewhere on the Internet: a Google search of [“Jack Kelly” and “USA Today”] threw up 3,470 matches, and while many of those are accounts of the plagiarism charge, many precede it. And what about blog references to Kelley’s stories?

I’ll take an example. In 2001 Jack Kelley wrote about a vigilante Jewish settler named Avi Shapiro. According to USA Today, this was one of the stories where “the evidence strongly contradicted Kelley’s published accounts”. That story has been posted on dozens of websites (I counted 60). Who’s going to correct, or raise flags on, all those?

Then there’s the doubt. With Kelley claiming, according to the USA Today report, that he was “being set up”, there’s no way that even a serious investigation by the paper (which included an eight-person team, a 20-hour interview with Kelley by three veteran journalists from outside the company, and extensive use of plagiarism-detection software) is going to confirm with any certainty what was faked or plagiarised. So what, exactly, do you correct? Do you delete his whole oeuvre?

It’s a tough one, and perhaps a sober reminder for journalists (and bloggers) using the Internet as a source that it’s not just emails purporting to come from our bank that we need to double-check. Is there a technological solution to this? A digital watermark or trace that would allow someone to instantly correct a story, or at least notify those hosting the material that there’s a problem?
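One conceivable version of that trace, sketched below in Python with entirely hypothetical pieces: fingerprint each story’s text with a content hash, so any site hosting a copy can check the fingerprint against a publisher’s list of flagged stories. This is speculation about how such a system might work, not a description of any existing one.

    # Speculative sketch: hash each story's normalised text so identical
    # copies share a fingerprint, then check fingerprints against a
    # publisher's (hypothetical) list of flagged stories.

    import hashlib

    def fingerprint(article_text):
        """Normalise whitespace and case, then hash, so copies match."""
        normalised = " ".join(article_text.split()).lower()
        return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

    # In practice this list would be published by the paper; here, a stub.
    FLAGGED = {fingerprint("Avi Shapiro, a vigilante Jewish settler ...")}

    def is_flagged(article_text):
        return fingerprint(article_text) in FLAGGED

    # A reposted copy with different spacing still matches.
    print(is_flagged("Avi  Shapiro, a vigilante\nJewish settler ..."))  # True

The obvious weakness is that any edit to the copy changes the hash, so a real system would need fuzzier matching than this.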

Can Software End Plagiarism?

With all this gadgetry, you’d think that plagiarism was a thing of the past.

OK, it wasn’t plagiarism, more like fiction, but the point is the same: watching Shattered Glass, the movie about fabulating New Republic ‘journalist’ Stephen Glass, the other night, I couldn’t help wondering why no one had picked up on his lies earlier. I mean, this was 1998; the Internet existed, search engines existed. (The only explanation I could come up with was that the people who read The New Republic were not that bright, but that can’t be right: it’s the inflight magazine of Air Force One.)

Anyway, according to Editor & Publisher, the technology exists to check for plagiarism quite easily. The problem is that newspapers and other publications don’t want to use it. John Barrie, president of iParadigms LLC, is quoted as saying (via the daily news weblog of the USC Annenberg Online Journalism Review) that newspapers generally don’t want to use his online detection program to prevent plagiarism because they don’t want to admit there is a problem. The software compares documents with databases containing news sources and encyclopedias.

So far the only journalistic use of Barrie’s software has been in revealing that Central Connecticut State University’s president, Richard Judd, plagiarized from several sources (including The New York Times) for an opinion piece he wrote for The Hartford Courant. Barrie is quoted as saying that ombudsmen and public editors, a common feature nowadays at U.S. papers, are not enough. “It’s essentially as good as doing nothing,” he said. He believes that just having iThenticate around would deter writers from copying material, because they would know their work will be vigorously checked.

The company’s website indicates its work is mainly with student essays, where the software is “now deterring plagiarism for nearly 6 million students and educators worldwide”. Editor & Publisher says iParadigms was founded in 1996, growing out of a computer program that UC Berkeley researchers used to inspect undergraduate research papers. Folk wanting to use the software pay a $1,000 licensing fee and $10 per page; they then send the document to iThenticate and receive a report within minutes, detailing what (if anything) has been plagiarized and where it originally came from.
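iParadigms doesn’t publish how iThenticate works, but the kind of document comparison Editor & Publisher describes is commonly done with word n-gram “shingling” plus an overlap score. Here’s a toy Python sketch of that generic technique, not of iThenticate’s actual algorithm:

    # Break each document into overlapping five-word "shingles", then
    # score what fraction of the suspect document's shingles also appear
    # in the source. A toy illustration of the generic technique.

    import re

    def shingles(text, n=5):
        """Return the set of overlapping n-word sequences in the text."""
        words = re.findall(r"[a-z']+", text.lower())
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap(suspect, source, n=5):
        """Fraction of the suspect's shingles also found in the source."""
        a, b = shingles(suspect, n), shingles(source, n)
        return len(a & b) / len(a) if a else 0.0

    source = "The quick brown fox jumps over the lazy dog near the old barn."
    suspect = "He wrote that the quick brown fox jumps over the lazy dog."
    print("{:.0%} of the suspect's phrases match".format(overlap(suspect, source)))

Real services add indexing so one document can be checked against millions of sources at once, but the scoring idea is much the same.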