Tag Archives: Massachusetts

Social Media and Politics: Truthiness and Astroturfing

By Jeremy Wagstaff

(This is a column I wrote back in November. I’m repeating it here because of connections to astroturfing in the HBGary/Anonymous case.)

Just how social is social media? By which I mean: Can we trust it as a measure of what people think, what they may buy, how they may vote? Or is it as easy a place to manipulate as the real world?

The answers to these questions aren’t of academic interest only. They go right to the heart of what may be our future. More and more of our world is online. And more and more of our online world is social media: A quarter of web pages viewed in the U.S. are on Facebook. So it’s not been lost on those who care about such things that a) what we say online may add up to be a useful predictor of what we may do at the shops, the movies, at the polling booth. And b) that social media is a worthwhile place to try to manipulate what we think, and what we do at the shops, the movies—and at the ballot box.

There is plenty of evidence supporting the former. Counting the number of followers a candidate has on Facebook, for example, is apparently a pretty good indicator of whether they’ll do well at the ballot box. The Daily Beast set up something called the Oracle which scanned 40,000 websites—including Twitter—to measure whether comments on candidates in the recent U.S. elections were positive, negative, neutral or mixed. It predicted 36 out of 37 Senate races and 29 out of 30 Governors’ races and nearly 98% of the House races. That’s pretty good.

Dan Zarrella, a self-styled social media scientist, counted the followers of the Twitter feeds in 30 Senate, House and Governor races and found that in 71% of the races, the candidate with the most Twitter followers was ahead in the polls. And Facebook found that candidates with more Facebook fans than their opponents won 74% of House races, and 81% of Senate races. More than 12 million people used the “I Voted” button this year, more than double the 2008 figure.

Why is this interesting? Well, social media, it turns out, is quite a different beast to even recent phenomena such as blogs. It really is social, in that more than previous Internet methods of communication, it reflects the views of the people using it. It is, one might say, democratic.

A study by researchers from the Technical University of Munich of the 2009 federal parliamentary elections in Germany, for example, revealed that, in contrast to the bulletin boards and blogs of the past, Twitter was reflective of the way Germans voted. Unlike bulletin boards and blogs, they wrote, “heavy users were unable to impose their political sentiment on the discussion.” The large number of participants, they found, “make the information stream as a whole more representative of the electorate.”

In other words, social media is as much a battleground for hearts and minds as the rest of the world. Even more so, perhaps, because it’s easier to reach people. Forget knocking on doors or holding rallies: Just build a Facebook page or tweet.

And, maybe, hire some political operators to build a fake movement, aka astroturfing?

Astroturfing, for those not familiar with the term, is the opposite of grassroots. If you lack the support of ordinary people, or don’t have time to get it, you can still fake it. Just make it look like you’ve got grassroots support. Since the term was coined in the mid-1980s it has become a popular activity among marketers, political operators and governments (think of China’s 50-cent blogging army). Astroturfing, in short, allows a politician to seem a lot more popular than he really is by paying folk to say how great he is.

Whether social media is ripe for astroturfing isn’t clear. On one hand, we know that the Internet is full of fakery and flummery: Just because your inbox is no longer full of spam doesn’t mean the Internet isn’t full of it—87%, according to the latest figures from MessageLabs. You don’t see it because the filters are getting better at keeping it away from you. Twitter, by contrast, is much less spammy: the latest figures from Twitter suggest that after some tweaks earlier this year the percentage of unwanted messages on the service is about 1%.

So Twitter isn’t spammy, and it broadly reflects the electorate. But can it be gamed?

We already know that Twitter can spread an idea, or meme, rapidly—only four hops are needed before more or less everyone on Twitter sees it. In late 2009 Google unveiled a new product: real-time search. This meant that, atop the usual results of a search, Google would throw in the latest matches from the real-time web—in other words, Twitter and its ilk. So getting your tweets up there would be valuable if, say, you were a political operator and you wanted people to hear good things about your candidate, or bad things about your rival. But were people doing this? Two researchers from Wellesley College in Massachusetts wondered.

Panagiotis Takis Metaxas and Eni Mustafaraj studied the local senate race and found that they were. They looked at 185,000 Twitter messages which mentioned the two competing candidates and found that there was plenty of astroturfing going on—where political supporters were creating fake accounts and repeating each other’s messages, and sending them to likely sympathizers, in the hope of their messages hitting the mainstream.

The researchers found one group, apparently linked to an Iowa Republican group, was sending out one tweet a second linking to websites “exposing” their rival’s missteps and misstatements. Overall, the message they sent reached more than 60,000 users. The researchers concluded that “the fact that a few minutes of work, using automated scripts and exploiting the open architecture of social networks such as twitter, makes possible reaching a large audience for free…raises concerns about the deliberate exploitation of the medium.”
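
The mechanics the researchers describe (fake accounts repeating each other's messages in volume) are simple enough to sketch. Below is a toy detector in Python; the function name, threshold and sample data are invented for illustration and are not from the study:

```python
from collections import defaultdict

def flag_coordinated(tweets, min_accounts=5):
    """Group messages by normalized text and flag any message pushed
    verbatim by at least min_accounts distinct accounts -- the
    repetition pattern described above, in miniature."""
    by_text = defaultdict(set)
    for account, text in tweets:
        # Normalize whitespace and case so trivial variants collapse.
        by_text[" ".join(text.lower().split())].add(account)
    return {text: accounts for text, accounts in by_text.items()
            if len(accounts) >= min_accounts}

# Six sock-puppet accounts pushing one link, plus one organic tweet.
sample = [(f"sock{i}", "Candidate X EXPOSED http://example.com")
          for i in range(6)]
sample.append(("alice", "lovely morning for a run"))
flagged = flag_coordinated(sample)
```

Real detection systems weight many more signals (account age, follower overlap, timing), but identical text fanned out across many young accounts is the same basic tell.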

The point here is not merely that you’re propagating a point of view. That’s just spam. But by setting up fake Twitter accounts, tweeting, and then repeating these messages, you’re creating the illusion that these views are widespread. We may ignore the first Twitter message we see exposing these views and linking to a website, but will we ignore the second or the third?

This discovery of Twitter astroturfing in one race has prompted researchers at Indiana University to set up a tool they call Truthy—after comedian Stephen Colbert’s term to describe something that someone knows intuitively from the gut—irrespective of evidence, logic or the facts. Their tool has exposed other similar attacks which, while not explosive in terms of growth, are, they wrote in an accompanying paper, “nevertheless clear examples of coordinated attempts to deceive Twitter users.” And, they point out, the danger with these Twitter messages is that unless they’re caught early, “once one of these attempts is successful at gaining the attention of the community, it will quickly become indistinguishable from an organic meme.”

This is all interesting, for several reasons. First off, it’s only in the past few months that we’ve woken up to what political operators seem to be doing on Twitter. Secondly, while none of these cases achieves viral levels, the relative ease with which these campaigns can be launched suggests that a lot more people will try them out. Thirdly, what does this tell us about the future of political manipulation in social media?

I don’t know, but it’s naïve to think that this is just an American thing. Or a ‘what do you expect in a thriving democracy?’ thing. Less democratically minded organizations and governments are becoming increasingly sophisticated about the way they use the Internet to control and influence public opinion. Evgeny Morozov points to Lebanon’s Hezbollah, “whose suave manipulation of cyberspace was on display during the 2006 war with Israel”; my journalist friends in Afghanistan say the Taliban are more sophisticated about using the Internet than the Karzai government or NATO.

The good news is that researchers are pushing Twitter to improve its spam-catching tools to stop this kind of thing from getting out of hand. But I guess the bigger lesson is this: While social media is an unprecedented window on, and reflection of, the populace, it is also an unprecedented opportunity for shysters, snake oil salesmen and political operators to manipulate what we think we know.

It may be a great channel for the truth, but truthiness may also be one step behind.

Afghanistan’s TV Phone Users Offer a Lesson

By Jeremy Wagstaff

There’s something I notice amid all the dust, drudgery and danger of Kabul life: the cellphone TVs.

No guard booth—and there are lots of them—is complete without a little cellphone sitting on its side, pumping out some surprisingly clear picture of a TV show.

This evening at one hostelry the guard, AK-47 absent-mindedly askew on the bench, had plugged his into a TV. I don’t know why. Maybe the phone gave better reception.

All I know is that guys who a couple of years ago had no means of communication now have a computer in their hand. Not only that, it’s a television, itself a desirable device. (There are 740 TVs per 1,000 people in the U.S. In Afghanistan there are 3.)

But it doesn’t stop there. I’ve long harped on about how cellphones are the developing world population’s first computer and first Internet device. Indeed, the poorer the country, the more revolutionary the cellphone is. But in places like Afghanistan you see how crucial the cellphone is as well.

Electricity is unreliable. There’s no Internet except in a few cafes, hotels and offices willing to pay thousands of dollars a month. But you can get a sort of 3G service over your phone. The phone is an invisible umbilical cord in a world where nothing seems to be tied down.

Folk like Jan Chipchase, a former researcher at Nokia, are researching how mobile banking is beginning to take hold in Afghanistan. I topped up my cellphone in Kabul via PayPal and a service based in Massachusetts. This in a place where you don’t bat an eyelid to see a donkey in a side street next to a shiny SUV, and a guy in a smart suit brushing shoulders with a crumpled old man riding a bike selling a rainbow of balloons.

Of course this set me thinking. For one thing, this place is totally unwired. There are no drains, no power infrastructure, no fiber optic cables. The cellphone is perfectly suited to this environment that flirts with chaos.

But there’s something else. The cellphone is a computer, and it’s on the cusp of being so much more than what it is. Our phones contain all the necessary tools to turn them into ways to measure our health—the iStethoscope, for example, which enables doctors to check their patients’ heartbeats, or the iStroke, an iPhone application developed in Singapore to give brain surgeons a portable atlas of the inside of someone’s skull.

But it’s obvious it doesn’t have to stop there. iPhone users are wont to say “There’s an app for that” and this will soon be the refrain, not of nerdy narcissists, but of real people with real problems.

When we can use our cellphones to monitor air pollution levels, test water before we drink it, point them at food to see whether it’s gone bad or contains meat, or use them as metal detectors, passports, wallets or air purifiers, then I’ll feel like we’re beginning to exploit their potential.

In short, the cellphone will become, has become, a sort of Swiss Army penknife for our lives. In Afghanistan that means a degree of connectivity no other medium can provide. Not just to family and friends, but to the possibility of a better life via the web, or at least to the escapism of television.

For the rest of us in the pampered West, we use it as a productivity device and a distraction, but we should be viewing it as a doorway onto a vastly different future.

When a crime is not just saved on film—from Rodney King to the catwoman of Coventry—but beamed live through to services that scan activity for signs of danger, the individual may be protected in a way they presently are not.

We may need less medical training if, during the golden hour after an accident, we can use a portable device to measure and transmit vital signs and receive instruction. Point the camera at the wound and an overlay points out the problem and what needs to be done. Point and click triage, anyone?

Small steps. But I can’t help wondering why I’m more inspired by the imaginative and enterprising use of cellphones in places like Afghanistan, and why I’m less than impressed by the vapid self-absorption of the average smart phone user in our First World.

Now I’m heading back to the guard hut to watch the late soap.

Clock Shock

For those of you who can’t get out of bed in the morning, the alarm clock that outwits you is finally here. I mentioned Clocky in a WSJ column more than a year ago in talking about the problems of ignored alarms:

Efforts to overcome this problem have been inventive, but rarely successful, says Gauri Nanda, a 26-year-old graduate student in the Massachusetts Institute of Technology’s Media Lab. “Just last week a man told me he currently uses three alarm clocks and then asks his friends to hide them,” she says. Ms. Nanda’s solution: an alarm clock called Clocky equipped with outsize wheels and shockproof covering (early prototypes are wrapped in brown shag), that goes off and then, when its snooze button is pressed, skedaddles across the room and hides, requiring owners to get out of bed and find it. By the time they have, the thinking goes, Clocky has done its job because they’re out of bed and wide awake, if a little frustrated.

Gauri tells me the clock is now out and about, although it’s dropped the shaggy pile in favor of robust rubber and plastic, leaping off your nightstand and running erratically around the room making an annoying, R2D2-like noise. (See a video here.)

I think it’s a great idea, although it’s not the only annoying alarm clock on the market; Uberreview lists some others.

How To Infect An Airport

Could it be possible to use Radio Frequency ID tags, or RFID, to transmit viruses? Some researchers reckon so. Unstrung reports that in a paper presented at the Pervasive Computing and Communications Conference in Pisa, Italy, researchers from Vrije Universiteit in Amsterdam, led by Andrew Tanenbaum, show just how susceptible radio-frequency tags may be to malware. “Up until now, everyone working on RFID technology has tacitly assumed that the mere act of scanning an RFID tag cannot modify backend software, and certainly not in a malicious way,” the paper’s authors write. “Unfortunately, they are wrong.”

According to The New Scientist the Vrije Universiteit team found that compact malicious code could be written to RFID tags by replacing a tag’s normal identification code with a carefully written message. This could in turn exploit bugs in a computer connected to an RFID reader. This made it possible, the magazine says, to spread a self-replicating computer worm capable of infecting other compatible, and rewritable, RFID tags.

An RFID tag is small — roughly the size of a grain of rice, the New Scientist says — and contains a tiny chip and radio transmitter capable of sending a unique identification code over a short distance to a receiver and a connected computer. Tags are widely used in supermarkets, warehouses, pet tracking and toll collection. But the technology is still in the early stages of development, which leaves it vulnerable. Until now, however, it was thought the tags’ small internal memory would make them impossible to infect. Not so, say the researchers.

So what would happen, exactly? An RFID virus would find its way into the backend databases used by the RFID software. The paper, Unstrung says, outlines three scenarios: a prankster who replaces an RFID tag on a jar of peanut butter with an infected tag to infect a supermarket chain’s database; a subdermal (i.e., under-the-skin) RFID tag on a pet used to upload a virus into a veterinarian or ASPCA computer system; and, most alarmingly, a radio-frequency bag tag used to infect an airport baggage-handling system. A virus in an airport database could re-infect other bags as they are scanned, which in turn could spread the virus to hub airports as the traveler changes planes.
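
The underlying flaw is classic injection: the backend treats the tag's identification field as trusted input. A minimal sketch, assuming a SQL backend (the table name, helper functions and tag payload are all hypothetical, not from the paper):

```python
import sqlite3

def record_tag_unsafe(conn, tag_id):
    # Vulnerable: the tag's contents are spliced straight into the
    # SQL string, so a crafted tag can smuggle in extra statements
    # (e.g. one that rewrites other tags as they are scanned).
    conn.executescript("INSERT INTO bags (tag) VALUES ('%s');" % tag_id)

def record_tag_safe(conn, tag_id):
    # Parameterized query: the tag is always treated as plain data,
    # never as SQL, so the payload is stored inertly.
    conn.execute("INSERT INTO bags (tag) VALUES (?)", (tag_id,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bags (tag TEXT)")
malicious = "x'); DROP TABLE bags; --"
record_tag_safe(conn, malicious)  # stored harmlessly as a string
```

Passing that same payload through `record_tag_unsafe` would execute the embedded `DROP TABLE`, which is essentially the baggage-system scenario in miniature.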

So how likely is this? Not very, Unstrung quotes Dan Mullen, executive director of AIM Global, a trade association for the barcode and RFID industries, as saying. “If you’re looking at an airport baggage system, for instance, you have to know what sort of tag’s being used, the structure of the data being collected, and what the scanners are set up to gather,” he explains. Red Herring quotes Kevin Ashton, vice president of marketing for ThingMagic, a Cambridge, Massachusetts-based designer of reading devices for RFID systems, as saying the paper was highly theoretical and the theoretical RFID viruses could be damaging only to an “incredibly badly designed system.” Hey, that sounds a bit like a PC.

But he does make a good point: because RFID systems are custom designed, a hacker would have to know a lot about the system to be able to infect it. But that doesn’t mean it can’t be done, and it doesn’t mean it won’t get easier to infect. As RFID becomes more widespread, off-the-shelf solutions are going to become more common. And besides, what will stop a disgruntled worker from infecting a system he is using? Or an attacker from obtaining some tags and stealing a reader, say, and then reverse engineering the RFID target?

My instinct would be to take these guys seriously. As with Bluetooth security issues such as Bluesnarfing, the tendency is for the industry itself not to take security seriously until someone smarter than them comes along and shows them why they should.

The End of Airport WiFi?

An interesting battle is going on in Boston over airport WiFi. If one side wins it may spell the end to WiFi in airports — at least those not operated by the airport itself. The Boston Globe reports that Logan International Airport officials’ ongoing quest to ban airline lounges from offering passengers free WiFi Internet services is angering a growing array of powerful Capitol Hill lobbying groups, who say Logan could set a dangerous nationwide precedent for squelching wireless services:

Soon after activating its own $8-a-day WiFi service in the summer of 2004, the Massachusetts Port Authority, which runs Logan, ordered Continental and American Airlines to shut down WiFi services in their Logan lounges. Massport also ordered Delta Air Lines Inc. not to turn on a planned WiFi service in its new $500 million Terminal A that opened last March. […]

Massport has consistently argued its policy is only trying to prevent a proliferation of private WiFi transmitters that could interfere with wireless networks used by airlines, State Police, and the Transportation Security Administration. WiFi service providers are free to negotiate so-called roaming deals, Massport officials say, that would let their subscribers who pay for monthly access use the Logan network. But major providers including T-Mobile USA have balked at Massport’s proposed terms, saying the airport authority seeks excessive profits.

It all sounds a bit lame to me. My experience of Logan’s WiFi in late 2004 was woeful, although perhaps that has changed; Massport’s PR later said they were having teething troubles as it had just been installed. But it seems weak to argue that one WiFi service may not affect communications whereas others might; charging excessively for it suggests the real motive. If interference is the problem, will all those in-office WiFi networks in terminal offices be closed down, and will all onboard WiFi networks be banned too? What about buildings close to the airport?

The scary thing is that if Massport wins this, other airports are bound to leap aboard. And not just in the U.S. If airport authorities think they can make money out of this, I’m sure they will follow suit. I’m worried. Unless it means better and free WiFi in airports, in which case I’m all for it. Let’s face it, sometimes WiFi services are so bad in airports you feel as if it’s too important a commodity to be left to small bit players. More discussion of the issues here and here.

Biometrics Close To The Bone

Further to my column about fingerprint biometric scanners (subscription only), I’ve heard from a company working on a different kind of biometric security: via the bone.

Last week, Mass.-based RSA Security Inc. (the guys who make the SecurID number tag, called ‘a two-factor user authentication system’ in the jargon) announced a joint research collaboration with Israel’s i-Mature, specialists in ‘online age recognition’. The two vow to bring together RSA Security’s cryptographic expertise and i-Mature’s Age-Group Recognition (AGR) technology to “work towards a unique solution that would genuinely improve the safety of the Internet for children, by enabling both adult and children’s sites to restrict their content more reliably to their appropriate audience”:

i-Mature has developed an innovative technology that can determine, through a simple biometric bone-scanning test, whether a user is a child or an adult – and thereby control access to Internet sites and content. AGR technology could help prevent children from accessing adult Internet sites and prevents adults from accessing children’s sites and chat rooms.

As far as I understand it, users wanting to visit a website would be required to press their fist against a small scanner, which would work out whether they are 18 or above, or 13 or younger, and then determine, based on software installed at the website itself, whether they are old enough to visit it.
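
Neither company has published the protocol, but the flow as described (the scanner reports only an age group, and software installed at the site decides admission) might be sketched like this; the site names, policy table and function are all hypothetical:

```python
from enum import Enum

class AgeGroup(Enum):
    CHILD = "13 or younger"
    ADULT = "18 or older"

# Each participating site declares which age groups it admits.
SITE_POLICY = {
    "adults-only.example": {AgeGroup.ADULT},
    "kids-chat.example": {AgeGroup.CHILD},
}

def may_enter(site, scanned_group):
    # The scanner never reports an identity, only an age group,
    # so the site stores no personal data about the visitor.
    allowed = SITE_POLICY.get(site)
    if allowed is None:
        return True  # site doesn't participate; no gate applies
    return scanned_group in allowed
```

The interesting design property is visible even in this toy version: the decision needs nothing but the age group, which is the privacy benefit discussed below.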

Although the i-Mature website focuses not on confirming the identity of the user but his/her age group, the press release suggests that RSA’s involvement would in fact bring some verification: The project would bring a “unique combination of technologies verifying that the person accessing the age-appropriate site is in fact who they claim to be,” the release says.

Obvious benefits? No need for the website itself to know who the user is or keep any data on them, since the scan simply confirms an age group. Users can’t transfer their passwords or authentication tag to someone else (unless, I guess, they happen to be around to ‘fist’ themselves into the computer for another user). There is also not much work for the parent or teacher in setting things up. It might prove popular with public Internet access, since providers might be able to use it to limit underage surfing to a select number of websites.

Downsides? The website the person visits needs to have software installed to match the fist-tag. While some pornographic sites, for example, are going to be delighted to conform and limit access, I can’t imagine all of them are. And how many porn websites are there out there at any given point?

I assume RSA and (the rather oddly named) i-Mature are going to limit their targets to chat-rooms and more general websites, rather than the pornographic web. Indeed, the press release suggests as much: “The collaboration will include joint research as well as joint marketing activities around age-group recognition, including market education and engagement with government policy makers.”

Indeed, i-Mature has set its sights more broadly than the net: The press release says:

The protection and safety of children is also required outside the Internet arena. The AGR system complies with this since it is also compatible with mobile phones, television, video and DVD systems that can use AGR technology to prevent children from viewing harmful content. i-Mature can also partner with developers of computer games, online games and video games to block extremely violent and un-educational materials.

Sounds like something worth watching.

How To Scoop The U.S. Press Corps

Perhaps we journalists need better tools to find the stories we’re looking for.

According to Ed Cone, a posting on an aviation website late on Monday may have been the first evidence of the Kerry/Edwards tie-up: A poster to the US Airways forum called ‘aerosmith’ said he/she had spotted John Kerry’s plane in a Pittsburgh hangar, where “John Edwards’ decals were being put on engine cowlings and upper fuselage”. The time: 9:44 PM (11:45 PM EDT).

Unfortunately the poster, a US Airways employee called Bryan Smith, wasn’t allowed to take a photo, which would have erased any doubt, but confirmation appeared to come from another poster, ‘Prince of PAWOBs’ (a PAWOB, in flying lingo, is a Passenger Arriving With Out Bag), who posted a few hours later that he/she had heard “North Ramp control on 130.775 Mhz talking to another aircraft about it tonight”.

Ironically, within a few hours The New York Post was hitting the streets with its own ‘scoop’ about Gephardt. The first mainstream journalist to report Kerry’s correct selection was Andrea Mitchell on NBC’s “Today” show at 7:30 a.m. Tuesday — eight hours after aerosmith’s posting. Here are AP’s version and UPI’s version of the aerosmith story.

There’s a lot of talk about blogs being the Next Journalism, and I for one believe that this is how a lot of folk are going to get, and are already getting, their news. It’s a great development, and I don’t worry that people get wrong information: Time will weed out the good stuff from the bad. But maybe there’s a broader lesson here, especially for us journalists. There are other ways to research stories and find scoops than the obvious political channels and sources. This time around it was an obscure website for flying enthusiasts.