Workplace surveillance, from Russia with love

(Part 3 of a series on post-covid remote working. Part 2 here)

Ok, so you’ve decided to install some workplace surveillance software, despite all the good reasons why you shouldn’t. Do you know exactly what you’re letting yourself in for?


A basic question: Who, exactly, are these companies?

Let’s take a look at one: StaffCop — the dude with the shades. It’s owned by Atom (sometimes Atomic) Security Inc (sometimes LLC), which despite its name is actually based in the Russian city of Novosibirsk, in southwest Siberia. (Here’s StaffCop’s Russian website.)

And what do they do?

A datasheet for its enterprise product promises “employee monitoring the way you couldn’t imagine!” which probably sounds better in Russian. StaffCop is refreshingly candid about what it offers — all the usual stuff, as well as a ‘wayback machine’ to rewind and see what an employee was doing at any specified time in the past.

It can even activate computer microphones to “actually hear what’s going on around specific workstations and specific times.” (It’s not clear to me whether this is part of the ‘wayback machine’s’ capabilities.) The datasheet also mentions being able to activate the computer’s webcam. The latest version of its software, released on June 22, includes the following:

  • can record any audio in any application
  • can recognise faces on web-cam snapshots (presumably those photos discreetly taken by employees’ webcams)

In short, StaffCop is basically a way to hack into your employees’ computers. And that, of course, raises not only ethical questions, but also practical ones. If a company is using StaffCop, say, what vulnerabilities might it have opened up? Two questions arise: first, does the hacking software itself introduce inadvertent vulnerabilities, or render existing software vulnerable? And second, where is all this data the company is collecting on its employees going, besides the boss’s console?

Well, to answer the first question, StaffCop has previous. In 2015, it was found to be using a piece of software called Redirector (developed by a now-defunct company called Komodia) which intercepts traffic on a target computer. The software was built with snooping in mind, along with manipulating data (including decrypting it), injecting ads and so on. The vulnerabilities discovered in 2015 would have allowed third parties to conduct man-in-the-middle attacks, which are exactly what they sound like: someone grabbing data on its journey between two computers.

So what about the company name? Any time I see a company having slightly different versions of its name, I get nosy. StaffCop, it transpires, has its roots deep in the world of spam.

Atom Security Inc. was set up in 2001 and says that it is (or was) a Microsoft Certified Partner. The CEO of the company is cited as one Dmitry Kandybovich, who appears to own 61% of the Russian entity LLC Atom Bezopasnost, and whose rather threadbare LinkedIn profile also lists him as chief of sales for one AtomPark Software.

AtomPark Software has a somewhat different pedigree, focused mainly on mass-mail software. Indeed, that’s its domain name. AtomPark has long been in the crosshairs of the anti-spam brigade: the Spamhaus Project has a whole page dedicated to it, and in particular to one Evgeny Medvednikov, who it says is (or was) owner of the domain staffcop.com, among others.

Medvednikov seems to have moved on, and is now based in New York, according to his LinkedIn profile, where he lists his achievements simply: “Run and scale Internet projects. Again and again. Can not disclose them all.” (AtomPark is mentioned in a recommendation he gives one of his former employees.) He has invested in several U.S. companies, mostly in email marketing. He also founded SendPulse, a company which combines multi-channel marketing with chatbots, automating much of the process. It claims PwC, Radisson and Swatch amongst its clients.

And that pretty much squares the circle. I’m definitely not saying that just because StaffCop is based in Russia it’s not qualified or trustworthy. I’m not saying that its roots in spam and its use of dubious third-party software disqualify it. Nor am I saying that all other companies doing this kind of thing have similar backgrounds.

But it should be obvious by now, after reading these three posts, that the nature of these tools — the intent, and the technical knowhow to implement that intent — inevitably leads them into an ethically compromised world, which is where spam and hacking have long made their home. By definition and design they are snooping on a user, using subterfuge and overriding, or bypassing, existing security features of the computer system. That compromises the work computer, and it also compromises the individual.

It also, inevitably, compromises the user’s trust — in this case, in their own boss.

If as a boss you can’t trust your employee, and you go down this road, then don’t expect your employee to trust you.

Employee snooping is big business. Expect it to get bigger

I wrote previously about how snooping on employees is going to become the norm as managers scramble to deal with a workforce that is reluctant — or unable — to return to the workplace. Enabling this will be a host of tools available to companies. It’ll be impossible for a lot of bosses to resist.

There’s already a whole market — worth $4 billion by 2023 according to this report — of employee surveillance tools. Some of them sound cute (Hubstaff, Time Doctor), some less so (VeriClock, ActivTrak, StaffCop and Work Examiner).


The second question asked of you before you can access Time Doctor’s home page. 

They all feed off the fear of the manager-by-line-of-sight. Workpuls, for example:

Remote work has certainly made employees more independent from their superiors, if nothing else, then because they simply aren’t in the same physical location. That means you are never quite sure if the staff is watching funny videos or actually working.

While no one expects people to work for eight hours straight, it’s important to ensure that they are working on tasks that actually have high priority, and not just answer a few emails and go out for ice-cream and rollerblading for the rest of the day.

This perception is fed by a longstanding piece of ‘data’ which claims that workers actually only work 2 hours and 53 minutes in any work day. The study is regularly cited as proof positive that we’re all lazy gits when it comes to home working, though its source rarely is. I’ve written a separate piece debunking this little gem.

So what do these tools do? Well, most monitor what software you’re using and what websites you’re logged into, for a start. The idea is to virtually handcuff you to work. For example, Time Doctor will

  • ask the user if they’re still working when they visit a social media site. “Whenever an employee accesses unproductive sites like these, the app automatically sends them a pop-up asking them if they’re still working. This little nudge is usually enough to get them off the social media site and back to work.”
  • give managers access to a ‘Poor Time Use’ report that details what sites an employee accessed and how long they spent there. Time Doctor can also take screenshots of employees’ screens at random intervals to ensure that they’re on productive sites.

Others, like Keeper, will monitor employees’ browsing history, ostensibly to check they’re not venturing onto the dark web.

Workpuls, meanwhile, boasts

Our all-seeing agent captures all employee actions. From app and website usage, to words typed in a program, right down to detecting which tasks are being worked on based on mouse clicks.

The more sinister aspect to this is that managers not only don’t trust their employees to work remotely, but they don’t trust them not to steal stuff. And we’re not talking paperclips. This is called Data Loss Prevention, or DLP, and is itself big business. One estimate has the market worth $1.21 billion in 2019, rising to $3.75 billion by 2025.

These tools include (this according to a deck from Teramind)

  • machine learning which scans an employee’s workflow, ‘fingerprinting’ documents and then tracking any changes and movement
  • ‘on the fly’ content discovery
  • clipboard monitoring — everything you copy and paste will be collected
  • advanced optical character recognition: think studying images and videos watched and uploaded by employees to check for steganographic data exfiltration (steganography is when data is hidden in a supposedly harmless message, often a picture).
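To make the steganography point concrete, here is a minimal sketch (in Python, with invented data, and nothing to do with any vendor’s actual product) of the classic least-significant-bit trick: tucking message bits into the lowest bit of each pixel byte, where the change is invisible to the eye.

```python
# Least-significant-bit (LSB) steganography sketch: hide a short message
# in the lowest bit of each byte of "pixel" data. Illustrative only.

def hide(pixels: bytearray, message: bytes) -> bytearray:
    # Unpack the message into bits, least significant bit first.
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def reveal(pixels: bytes, length: int) -> bytes:
    # Collect the low bits back and reassemble the original bytes.
    bits = [pixels[i] & 1 for i in range(length * 8)]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(length)
    )

# A fake 'image': 256 bytes of mid-grey pixels.
cover = bytearray([128] * 256)
stego = hide(cover, b"exfil")
assert reveal(stego, 5) == b"exfil"
# The cover barely changes: each byte differs by at most 1.
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```

This is what a DLP scanner is up against: the ‘picture’ is statistically almost identical to the original, which is why the vendors reach for machine learning rather than simple comparisons.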

It’s not so much the eye-popping technology involved, as the realisation that everything that an employee does on a work computer (or a work-related computer) can be, and probably is, being monitored.

To be fair, companies like Teramind are focusing less on employee productivity and more on catching the bad apples. But these tools still sound to me overly intrusive. And in my next post I’ll show why.

The Internet of Things Could Kill You, Or At Least Jab You With A Screwdriver


Lucas and his killer robots. Photo: JW

(This is the transcript of my BBC World Service piece which ran today. The original Reuters story is here.) 

I’m sure you’ve seen those cute little humanoid robots around? They’re either half size or quarter size, they look like R2D2, and if you believe the ads, they could play with your kids or hold a screwdriver while you fix something under the sink. Some of them cost under $1,000. Nice, right?

Well, maybe not. The problem with these robots is that, a lot like everything else connected to the internet, they’re vulnerable to hackers. Lucas Apa, a researcher from IOActive, brought a couple into my office recently to show just how easy it is. These robots connect through wifi so you can control them, but that connection is really easy to hack, he showed. He says there’s very little, if any, security involved at all. In short, a bad guy could take over control of the robots and make them move, or monitor you — what you’re saying, what you’re doing — and send that back out to people. Or attack you.

To prove it he made one of the robots wander around as if it were drunk, while another, mimicking the ad, jabbed a screwdriver viciously while reciting lines from horror movie doll Chucky. These things, frankly, are scary enough with their unblinking eyes and the way they tilt their heads to face you, even if you move. But Chucky’s voice and the screwdriver really freaked me out.

Lucas’ demonstration was just that: this is what could happen, he says, if we allow these things into our home and let kids play with them. He says there’s no evidence so far that anyone has actually done this. The scariest thing, though, was that he’d been in touch with the half-dozen manufacturers of these things, some based in the US, some in Asia, for months, and for the most part they’d either ignored him or said it wasn’t a problem. I got back to him recently and asked whether things had improved since he’d gone public. No, he says; the companies that say they’ve addressed the problems haven’t.

For those of us watching the internet of things this is a familiar refrain. There are so many things connecting to the internet these days it’s not surprising that there are problems.  There are dozens of devices in a home connecting, or trying to connect, to the wifi network. A senior cybersecurity guy told me he had found a bug in his wifi-connected barbeque that could theoretically have allowed someone to start a fire remotely. 
In short, the people making these devices do not treat security as a priority, and indeed may not understand it.

The irony is that these are physical devices, not just computers, and so they could actually do more real-world damage, if not cause us physical harm, than a computer sitting in the corner. Sure, the latter contains credit cards and personal data, but we rely on these connected devices to feed us, carry us, clean us, protect us from intruders. 

As Lucas showed with his Chucky-esque robot, this is not something we should be doing without a) thinking hard about how useful this is and b) quizzing the companies — hard — about how secure their devices are.  I’m not convinced we’ve really thought this all the way through.

I’m An Airline, Fly Me

This is an email from a bona fide airline:

Dear Sir/Madam,

Please be informed that your transaction with [international carrier] has been confirmed. Due to fraud prevention procedure against Credit Card transaction, we would like to validate your recent transaction with [international carrier] by filling information below :

Passenger(s) name :
Route :
Date of Travel :
Cardholder name :
Address :

Also, we need to confirm and validate your name and last four digit of your card number. Please kindly provide scanned/image of your front side credit card that used to buy the ticket. You may cover the rest information on the card. Please reply in 8 hours after received this email or we will cancel the reservation.

Thank you for your cooperation.

Best Regards,
Verification Data Management

Ripe for Disruption: Bank Authentication

One thing that still drives me crazy, and doesn’t seem to have changed with banks, is the way they handle fraud detection with the customer. Their sophisticated algorithms detect fraudulent activity; they flag it, suspend the card, and give you a call, leaving a message identifying themselves as your bank and asking you to call back a number — one which is not on the back of the credit card you have.

So, if you’re like me, you call back the number given in the voice message and have this conversation:

Hello this is Bank A’s fraud detection team, how can I help you today?
Hi, quoting reference 12345.
Thank you, I need some verification details first. Do you have your credit card details to hand?
I do, but this number I was asked to call was not on the back of my card, so I need some evidence from you that you are who you say you are first.
Unfortunately, I don’t have anything that would help there.

So then you have to call the number on the card, and then get passed from pillar to post until you reach the right person.

How is this still the case in 2016, and why have no thoughtful disruptive folk thought up an alternative? Could this be done on the blockchain (only half sarcastic here)? I’d love to see banks, or anyone, doing this better.

A simple one would be for them to have a safe word for each client, I should think, which confirms to me that they are who they say they are. It seems silly that they can’t give some piece of information — it doesn’t even have to be private information — that would show who they are, but that only a customer would know.
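The safe-word idea is essentially mutual authentication, and it generalises. Here is a minimal sketch in Python (the secret and all names are invented for illustration — no bank does exactly this) of a challenge-response version: the customer issues a random challenge, and the bank proves it holds a shared secret by returning an HMAC over it, without ever speaking the secret aloud.

```python
# Sketch of bank-to-customer authentication via HMAC challenge-response.
# The shared secret would be agreed when the account is opened; this one
# is hypothetical.
import hashlib
import hmac
import secrets

SHARED_SECRET = b"correct-horse-battery-staple"  # invented per-client secret

def bank_respond(secret: bytes, challenge: bytes) -> str:
    """The bank proves it knows the secret without disclosing it."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def customer_verify(secret: bytes, challenge: bytes, response: str) -> bool:
    """The customer recomputes the HMAC and compares in constant time."""
    expected = hmac.new(secret, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

# The customer picks a fresh random challenge for each call, so a
# response overheard by a fraudster can't be replayed later.
challenge = secrets.token_bytes(16)
response = bank_respond(SHARED_SECRET, challenge)
assert customer_verify(SHARED_SECRET, challenge, response)
assert not customer_verify(b"impostor-secret", challenge, response)
```

A plain safe word achieves much the same thing with less machinery; the HMAC version merely avoids the word being burned the first time it is spoken on a tapped line. No blockchain required.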

Mind the air-gap: Singapore’s web cut-off balances security, inconvenience | Reuters

A piece I co-wrote on Singapore’s decision to effectively air-gap most of its government computers — beyond security, military and intelligence. This is not something they’ve done lightly, but it does feel as if they might not have thought it all the way through. On the other hand, there were quite a few people I spoke to who said this might be the thin end of a larger wedge. And what does this mean for the cybersecurity industry? 

Mind the air-gap: Singapore’s web cut-off balances security, inconvenience | Reuters:

By Jeremy Wagstaff and Aradhana Aravindan | SINGAPORE

Singapore is working on how to implement a policy to cut off web access for public servants as a defense against potential cyber attack – a move closely watched by critics who say it marks a retreat for a technologically advanced city-state that has trademarked the term ‘smart nation’.

Some security experts say the policy, due to be in place by May, risks damaging productivity among civil servants and those working at more than four dozen statutory boards, and cutting them off from the people they serve. It may only raise slightly the defensive walls against cyber attack, they say.

Ben Desjardins, director of security solutions at network security firm Radware, called it ‘one of the more extreme measures I can recall by a large public organization to combat cyber security risks.’ Stephen Dane, a Hong Kong-based managing director at networking company Cisco Systems, said it was ‘a most unusual situation’, and Ramki Thurimella, chair of the computer science department at the University of Denver, called it both ‘unprecedented’ and ‘a little excessive.’

But not everyone takes that view. Other cyber security experts agree with Singapore authorities that with the kind of threats governments face today it has little choice but to restrict internet access.

FireEye, a cyber security company, found that organizations in Southeast Asia were 80 percent more likely than the global average to be hit by an advanced cyber attack, with those close to tensions over the South China Sea – where China and others have overlapping claims – were particularly targeted.

Bryce Boland, FireEye’s chief technology officer for Asia Pacific, said Singapore’s approach needed to be seen in this light. ‘My view is not that they’re blocking internet access for government employees, it’s that they are blocking government computer access from Internet-based cyber crime and espionage.’

AIR-GAPPING

Singapore officials say no particular attack triggered the decision, but noted a breach of one ministry last year. David Koh, chief executive of the newly formed Cyber Security Agency, said officials realized there was too much data to secure and the threat ‘is too real.’

Singapore needed to restrict its perimeter, but, said Koh, ‘there is no way to secure this because the attack surface is like a building with a zillion windows, doors, fire escapes.’

Koh said he was simply widening a practice of ministries and agencies in sensitive fields, where computers are already disconnected, or air-gapped, from the Internet.

Public servants will still be able to surf the web, but only on separate personal or agency-issued devices.

Air-gapping is common in security-related fields, both in government and business, but not for normal government functions. Also, it doesn’t guarantee success.

Anthony James, chief marketing officer at cyber security company TrapX Security, recalled one case where an attacker was able to steal data from a law enforcement client after an employee connected his laptop to two supposedly separated networks. ‘Human decisions and related policy gaps are the No.1 cause of failure for this strategy,’ he said.

‘STOPPING THE INEVITABLE’?

Indeed, just making it work is the first headache.

The Infocomm Development Authority (IDA) said in an email to Reuters that it has worked with agencies on managing the changes ‘to ensure a smooth transition,’ and was ‘exploring innovative work solutions to ensure work processes remain efficient.’

Johnny Wong, group director at the Housing Development Board’s research arm, called the move ‘inconvenient’, but said ‘it’s something we just have to adapt to as part of our work.’

At the Land Transport Authority, a group director, Lew Yii Der, said: ‘Lots of committees are being formed across the public sector and within agencies like mine to look at how we can work around the segregation and ensure front-facing services remain the same.’

Then there’s convincing the rank-and-file public servant that it’s worth doing – and not circumventing.

One 23-year-old manager, who gave only her family name, Ng, said blocking web access would only harm productivity and may not stop attacks. ‘Information may leak through other means, so blocking the Internet may not stop the inevitable from happening,’ she said.

It’s not just the critics who are watching closely.

Local media cited one Singapore minister as saying other governments, which he did not name, had expressed interest in its approach.

Whether they will adopt the practice permanently is less clear, says William Saito, a special cyber security adviser to the Japanese government. ‘There’s a trend in private business and some government agencies’ in Asia to go along similar lines, he said, noting some Japanese companies cut internet access in the past year, usually after a breach.

‘They cut themselves off because they thought it was a good idea,’ he told Reuters, ‘but then they realized they were pretty dependent on this Internet thing.’

Indeed, some cyber security experts said Singapore may end up regretting its decision.

‘I’m fairly certain they would regret it and wind up far behind other nations in development,’ said Arian Evans, vice president of product strategy at RiskIQ, a cyber security start-up based in San Francisco.

The decision is ‘surprising for a country like Singapore that has always been a leader in innovation, technology and business,’ he said.

(Reporting by Jeremy Wagstaff and Aradhana Aravindan, with additional reporting by Paige Lim; Editing by Ian Geoghegan)

BBC – Cybercrime: One of the Biggest Ever

My contribution to the BBC World Service – Business Daily, Cybercrime: One of the Biggest Ever

Transcript below. Original Reuters story here

If you think that all this cybersecurity stuff doesn’t concern you, you’re probably right. If you don’t have any dealings with government, don’t work for an organisation or company, and you never use the Internet. Or an ATM. Or go to the doctor. Or have health insurance. Or a pension.

You get the picture. These reports of so-called data breaches — essentially when some bad guy gets into a computer network and steals information — are becoming more commonplace. And that’s your data they’re stealing, and it will end up in the hands of people you try hard not to let into your house, your car, your bank account, your passport drawer, your office, your safe. They may be thieves, or spies, or activists, or a combination of all three.

And chances are you won’t ever know they were there. They hide well, they spend a long time rooting around. And then when they’ve got what they want, they’re gone. Not leaving a trace.

In fact, a lot of the time we only know they were there when we stumble upon them looking for something else. It’s as if you were looking for a mouse in the cellar and instead stumbled across a SWAT team in between riffling through your boxes, cooking dinner and watching TV on a sofa and flat screen they’d smuggled in when you were out.

Take for example, the case uncovered by researchers at a cybersecurity company called RSA. RSA was called in by a technology company in early 2014 to look at an unrelated security problem. The RSA guys quickly realized there was a much bigger one at hand: hackers were inside the company’s network. And had been, unnoticed, for six months.

Indeed, as the RSA team went through all the files and pieced together what had happened, they realised the attack went back even further.

For months the hackers — almost certainly from China — had probed the company’s defenses with software, until they found a small hole.

On July 10, 2013, they set up a fake user account at an engineering website. They loaded what is called malware — a virus, basically — to another site. The trap was set. Now for the bait. Forty minutes later, the fake account sent emails to company employees, hoping to fool one into clicking on a link which in turn would download the malware and open the door.

Once an employee fell for the email, the hackers were in, and within hours were wandering the company’s network. For the next 50 days they mapped the network, sending their findings back to their paymasters. It would be they who would have the technical knowledge, not about hacking, but about what documents they wanted to steal.

Then in early September they returned, with specific targets. For weeks they mined the company’s computers, copying gigabytes of data. They were still at it when the RSA team discovered them nearly five months later.

Having pieced it all together, now the RSA team needed to kick the hackers out. But that would take two months, painstakingly retracing their movements, noting where they had been in the networks and what they had stolen. Then they locked all the doors at once.

Even then, the hackers were back within days, launching hundreds of assaults through backdoors, malware and webshells. They’re still at it, months later. They’re probably still at it somewhere near you too.

Spy in the Sky – are planes hacker-proof?

My take on aviation cybersecurity for Reuters: Plane safe? Hacker case points to deeper cyber issues:

“Plane safe? Hacker case points to deeper cyber issues

BY JEREMY WAGSTAFF

Security researcher Chris Roberts made headlines last month when he was hauled off a plane in New York by the FBI and accused of hacking into flight controls via his underseat entertainment unit.

Other security researchers say Roberts – who was quoted by the FBI as saying he once caused ‘a sideways movement of the plane during a flight’ – has helped draw attention to a wider issue: that the aviation industry has not kept pace with the threat hackers pose to increasingly computer-connected airplanes.

Through his lawyer, Roberts said his only interest had been to ‘improve aircraft security.’

‘This is going to drive change. It will force the hand of organizations (in the aviation industry),’ says Jonathan Butts, a former US Air Force researcher who now runs a company working on IT security issues in aviation and other industries.

As the aviation industry adopts communication protocols similar to those used on the Internet to connect cockpits, cabins and ground controls, it leaves itself open to the vulnerabilities bedevilling other industries – from finance to oil and gas to medicine.

‘There’s this huge issue staring us in the face,’ says Brad Haines, a friend of Roberts and a security researcher focused on aviation. ‘Are you going to shoot the messenger?’

More worrying than people like Roberts, said Mark Gazit, CEO of Israel-based security company ThetaRay, are the hackers probing aircraft systems on the quiet. His team found Internet forum users claiming to have hacked, for example, into cabin food menus, ordering free drinks and meals.

That may sound harmless enough, but Gazit has seen a similar pattern of trivial exploits evolve into more serious breaches in other industries. ‘It always starts this way,’ he says.

ANXIOUS AIRLINES

The red flags raised by Roberts’ case are already worrying some airlines, says Ralf Cabos, a Singapore-based specialist in inflight entertainment systems.

One airline official at a recent trade show, he said, feared the growing trend of offering inflight WiFi allowed hackers to gain remote access to the plane. Another senior executive demanded that before discussing any sale, vendors must prove their inflight entertainment systems do not connect to critical flight controls.

Panasonic Corp and Thales SA, whose inflight entertainment units Roberts allegedly compromised, declined to answer detailed questions on their systems, but both said they take security seriously and their devices were certified as secure.

Airplane maker Boeing Co says that while such systems do have communication links, ‘the design isolates them from other systems on planes performing critical and essential functions.’ European rival Airbus said its aircraft are designed to be protected from ‘any potential threats coming from the In-Flight-Entertainment System, be it from Wi-Fi or compromised seat electronic boxes.’

Steve Jackson, head of security at Qantas Airways Ltd, said the airline’s ‘extremely stringent security measures’ would be ‘more than enough to mitigate any attempt at remote interference with aircraft systems.’

CIRCUMVENTING

But experts question whether such systems can be completely isolated. An April report by the U.S. Government Accountability Office quoted four cybersecurity experts as saying firewalls ‘could be hacked like any other software and circumvented,’ giving access to cockpit avionics – the machinery that pilots use to fly the plane.

That itself reflects doubts about how well an industry used to focusing on physical safety understands cybersecurity, where the threat is less clear and constantly changing.

The U.S. National Research Council this month issued a report on aviation communication systems saying that while the Federal Aviation Administration, the U.S. regulator, realized cybersecurity was an issue, it ‘has not been fully integrated into the agency’s thinking, planning and efforts.’

The chairman of the research team, Steven Bellovin of Columbia University, said the implications were worrying, not just for communication systems but for the computers running an aircraft. ‘The conclusion we came to was they just didn’t understand software security, so why would I think they understand software avionics?’ he said in an interview.

SLOW RESPONSE

This, security researchers say, can be seen in the slow response to their concerns.

The International Civil Aviation Organisation (ICAO) last year highlighted long-known vulnerabilities in a new aircraft positioning communication system, ADS-B, and called for a working group to be set up to tackle them.

Researchers like Haines have shown that ADS-B, a replacement for radar and other air traffic control systems, could allow a hacker to remotely give wrong or misleading information to pilots and air traffic controllers.

And that’s just the start. Aviation security consultant Butts said his company, QED Secure Solutions, had identified vulnerabilities in ADS-B components that could give an attacker access to critical parts of a plane.

But since presenting his findings to vendors, manufacturers and the industry’s security community six months ago he’s had little or no response.

‘This is just the tip of the iceberg,’ he says.

(Additional reporting by Siva Govindasamy; Editing by Ian Geoghegan)”

Chinese hackers target Southeast Asia, India, researchers say

Chinese hackers target Southeast Asia, India, researchers say | Reuters

My piece on FireEye’s report about hackers. Other reports have appeared since. 

Hackers, most likely from China, have been spying on governments and businesses in Southeast Asia and India uninterrupted for a decade, researchers at internet security company FireEye Inc said.

In a report released on Monday, FireEye said the cyber espionage operations dated back to at least 2005 and ‘focused on targets – government and commercial – who hold key political, economic and military information about the region.’

‘Such a sustained, planned development effort coupled with the (hacking) group’s regional targets and mission, lead us to believe that this activity is state-sponsored – most likely the Chinese government,’ the report’s authors said.

Bryce Boland, Chief Technology Officer for Asia Pacific at FireEye and co-author of the report, said the attack was still ongoing, noting that the servers the attackers used were still operational, and that FireEye continued to see attacks against its customers, who number among the targets.

Reuters couldn’t independently confirm any of the assertions made in the report.

China has always denied accusations that it uses the Internet to spy on governments, organizations and companies.

Asked about the FireEye report on Monday, foreign ministry spokesman Hong Lei said: ‘I want to stress that the Chinese government resolutely bans and cracks down on any hacking acts. This position is clear and consistent. Hacking attacks are a joint problem faced by the international community and need to be dealt with cooperatively rather than via mutual censure.’

The Cyberspace Administration of China, the Internet regulator, didn’t immediately respond to written requests for comment.

China has been accused before of targeting countries in South and Southeast Asia. In 2011, researchers from McAfee reported a campaign dubbed Shady Rat which attacked Asian governments and institutions, among other targets.

Efforts by the 10-member Association of Southeast Asian Nations (ASEAN) to build cyber defenses have been sporadic. While ASEAN has long acknowledged its importance, ‘very little has come of this discourse,’ said Miguel Gomez, a researcher at De La Salle University in the Philippines.

The problem is not new: Singapore has reported sophisticated cyber-espionage attacks on civil servants in several ministries dating back to 2004.

UNDETECTED

The campaign described by FireEye differs from other such operations mostly in its scale and longevity, Boland said.

He said the group appeared to include at least two software developers. The report did not offer other indications of the possible size of the group or where it’s based.

The group remained undetected for so long it was able to re-use methods and malware dating back to 2005, and developed its own system to manage and prioritize attacks, even organizing shifts to cope with the workload and different languages of its targets, Boland told Reuters.

The attackers focused not only on governments, but on ASEAN itself, as well as corporations and journalists interested in China. Other targets included Indian or Southeast Asian-based companies in sectors such as construction, energy, transport, telecommunications and aviation, FireEye says.

Mostly they sought to gain access by sending so-called phishing emails to targets, purporting to come from colleagues or trusted sources, and containing documents relevant to their interests.

Boland said it wasn’t possible to gauge the damage done as it had taken place over such a long period, but he said the impact could be ‘massive’. ‘Without being able to detect it, there’s no way these agencies can work out what the impacts are. They don’t know what has been stolen.’

Pornchai Rujiprapa, Minister of Information and Communication Technology for ASEAN member Thailand, said the government was proposing a new law to combat cyber attacks as existing legislation was outdated.

‘So far we haven’t found any attack so big it threatens national security, but we are concerned if there is any in the future. That’s why we need a new law to handle it,’ he told Reuters.

(Additional reporting by Ben Blanchard in BEIJING and Pracha Hariraksapitak in BANGKOK; Editing by Miyoung Kim and Ian Geoghegan)

(Via.)

BBC: Beyond the Breach

The script of my Reuters story on cybersecurity. Podcast available here (mp3)

If you’re getting tired of internet security companies using images of padlocks, moats, drawbridges and barbed wire in their ads, then chances are you won’t have to put up with them much longer.

Turns out that keeping the bad guys out of your office network has largely failed. All those metaphors suggesting castles, unassailable battlements, locked doors are being quietly replaced by another shtick: the bad guys are in your network, but we’ll find them, watch what they do, and try to ensure they don’t break anything or steal anything valuable.

Which is slightly worrying, if you thought firewalls, antivirus and the like were going to save you.

You’re probably tired of the headlines about cybersecurity breaches: U.S. insurer Anthem Inc saying hackers may have made off with some 80 million personal health records, while others raided Sony Pictures’ computers and released torrents of damaging emails and employee data.

Such breaches, say people in the industry, show the old ways have failed, and now is the chance for younger, nimbler companies selling services to protect data and outwit attackers. These range from disguising valuable data and diverting attackers up blind alleys to figuring out how to mitigate breaches once the data has already gone. It’s a sort of cat and mouse game, only going on inside your computers.

Cybersecurity, of course, is big business: some $70 billion was spent on it last year.

Of course, we’re partly to blame. We insist on using our tablets and smartphones for work; we access Facebook and LinkedIn from the office. All this offers attackers extra opportunities to gain access to corporate networks.

But it’s also because the attackers and their methods have changed. Cyber criminals and spies are being overshadowed by politically or religiously motivated activists, and these guys don’t just want to steal stuff, they want to hurt their victims. And they have hundreds of ways of doing it.

And they’re usually successful. All these new services operate on the assumption that the bad guy is already inside your house, as it were. And may have been there months. Research by IT security company FireEye found that “attackers are bypassing conventional security deployments almost at will.” Across industries from legal to healthcare it found nearly all systems had been breached.

Where there’s muck there’s brass, as my mother would say. Funding these start-ups are U.S.- and Europe-based venture capital firms which sense another industry ripe for disruption.

Google Ventures and others invested $22 million in ThreatStream in December, while Bessemer Venture Partners last month invested $30 million in iSIGHT Partners.

Companies using these services aren’t your traditional banks and whatnot. UK-based Darktrace, which uses maths and machine learning to spot abnormalities in a network that might be an attack, has customers like a British train franchise and a Norwegian shipping insurer.

But it’s early days. Most companies still blithely think they’re immune, either because they think they don’t have anything worth stealing or deleting, or because they think a firewall and an antivirus program are enough.

And of course, there’s another problem. As cyber breaches get worse, and cybersecurity becomes a more valuable business, expect the hype, marketing and dramatic imagery to grow, making it ever more confusing for the lay person to navigate.

I’ve not seen them yet, but I’m guessing for these new companies the shield and helmet images will be replaced by those of SAS commandos, stealthily patrolling silicon corridors. Or maybe it’ll be Tom, laying mousetraps for his nemesis. Might be apt: Jerry the cheese thief always seemed to win.