Singapore’s M1 aims narrowband deployment at the sea

Singapore telco M1 is getting Nokia to install an NB-IoT network atop its 4G one, interestingly with an eye not just to land but to sea. 

NB-IoT stands for Narrowband Internet of Things, and is the GSM world’s answer to narrowband technologies such as LoRa and Sigfox, which threaten to take away a chunk of its business when the Internet of Things does eventually take off. Why use expensive modems and services when you’re just trying to connect devices that want to tell you whether they’re on or off, full or empty, fixed or broken?

Techgoondu reports: “While that network caters to heavy users who stream videos or songs on the go, a separate network that M1 is setting up at the same time is aimed at the smart cars, sensors and even wearables.

They said pricing will likely vary with each solution or package, with some companies saving costs from deploying large amounts of connected sensors. However, others that require the bandwidth, say, to deliver surveillance videos over the air, would likely stick with existing 4G networks.

And while many NB-IoT devices are still on the drawing board – standards for the network were only finalised in June – M1 executives were upbeat about jumping on the bandwagon early.

Alex Tan, the telco’s chief innovation officer, said the technology would open up new business opportunities in the years ahead.”

A press release from M1 says it’s working with the ports authority — Singapore is one of the biggest ports in the world — to “explore the deployment of a network of offshore sensors to augment the situational awareness of our port waters,” according to Andrew Tan, Chief Executive of the Maritime and Port Authority (MPA).

This follows Sigfox’s deployment in the city state last month. It also pips rival Singtel to the post; Singtel has been talking since February about running a trial of NB-IoT with Ericsson. (Update: “Our preparation to trial NB-IoT is well underway. We are working with our vendors and industry partners to conduct lab trials in December, with a view to launch an NB-IoT network by mid-2017.”)

Here’s my earlier piece on LoRa

Pocket-sized Smartphone Breathalyzer

Further to my piece on smell-sensing tech, it seems that breathalyzers, which use gas sensors like this Alcohol Gas Sensor, are getting smaller. This one attaches to a smartphone, fits in a pocket and costs $35. (Via Interesting Engineering)

That’s not the cheapest one out there — this BACtrack Ultra-Portable Personal Keychain Breathalyzer probably is — but I think it’s probably the cheapest that connects to a phone. 

Some more links on the matter: 

Drinkmate | Specifications

Blood alcohol content – Wikipedia, the free encyclopedia
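The Wikipedia article above covers the standard back-of-envelope arithmetic. As a rough sketch — this is the textbook Widmark formula for estimating BAC from drinks consumed, not how any breathalyzer actually works (they measure breath alcohol directly) — the numbers go roughly like this; the distribution ratios and elimination rate are the usual textbook approximations:

```python
def widmark_bac(alcohol_grams, body_weight_kg, sex="male", hours_since_drinking=0.0):
    """Rough blood alcohol estimate (in %) via the classic Widmark formula.

    r is the Widmark body-water distribution ratio (~0.68 for men,
    ~0.55 for women); alcohol is eliminated at roughly 0.015 %/hour.
    Illustrative only.
    """
    r = 0.68 if sex == "male" else 0.55
    bac = (alcohol_grams / (body_weight_kg * 1000 * r)) * 100
    bac -= 0.015 * hours_since_drinking  # average elimination rate
    return max(bac, 0.0)

# Two standard drinks (~28 g of ethanol) for an 80 kg man, one hour later:
print(round(widmark_bac(28, 80, "male", 1), 3))  # ~0.036 %
```

Which is, of course, exactly the kind of estimate a $35 sensor-on-a-phone is trying to replace with an actual measurement.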

 

BBC World Service – Smell tech

At the end of this programme is my piece on smell technology, if you like that kind of thing: BBC World Service – Business Daily, UK FinTech Mulls a Post-Brexit Future. (With everything else going on it might seem a bit flippant, or maybe light relief.)

Can the UK’s financial technology or FinTech sector maintain its global lead after Brexit? We speak to Lawrence Wintermeyer, the chairman of the industry’s trade body Innovate Finance, about what he hopes the British government will negotiate in a new deal with the EU. Also, Michael Pettis, professor of finance at Peking University, tells us what Brexit looks like from China and why financial markets have been resilient to the initial shock of the referendum’s result. Plus, what’s the point of a smart phone that can smell? Jeremy Wagstaff, Thomson Reuters’ chief technology correspondent for Asia, says you may be surprised.

Nose job: smells are smart sensors’ last frontier | Reuters

My piece for Reuters about the technology of smell:

Nose job: smells are smart sensors’ last frontier

SINGAPORE | BY JEREMY WAGSTAFF

Phones or watches may be smart enough to detect sound, light, motion, touch, direction, acceleration and even the weather, but they can’t smell.

That’s created a technology bottleneck that companies have spent more than a decade trying to fill. Most have failed.

A powerful portable electronic nose, says Redg Snodgrass, a venture capitalist funding hardware start-ups, would open up new horizons for health, food, personal hygiene and even security.

Imagine, he says, being able to analyze what someone has eaten or drunk based on the chemicals they emit; detect disease early via an app; or smell the fear in a potential terrorist. ‘Smell,’ he says, ‘is an important piece’ of the puzzle.

It’s not through lack of trying. Aborted projects and failed companies litter the aroma-sensing landscape. But that’s not stopping newcomers from trying.

Like Tristan Rousselle’s Grenoble-based Aryballe Technologies, which recently showed off a prototype of NeOse, a hand-held device he says will initially detect up to 50 common odors. ‘It’s a risky project. There are simpler things to do in life,’ he says candidly.

MASS, NOT ENERGY

The problem, says David Edwards, a chemical engineer at Harvard University, is that unlike light and sound, scent is not energy, but mass. ‘It’s a very different kind of signal,’ he says.

That means each smell requires a different kind of sensor, making devices bulky and limited in what they can do. The aroma of coffee, for example, consists of more than 600 components.

France’s Alpha MOS (AMOS.PA) was first to build electronic noses for limited industrial use, but its foray into developing a smaller model that would do more has run aground. Within a year of unveiling a prototype for a device that would allow smartphones to detect and analyze smells, the website of its U.S.-based arm Boyd Sense has gone dark. Neither company responded to emails requesting comment.

The website of Adamant Technologies, which in 2013 promised a device that would wirelessly connect to smartphones and measure a user’s health from their breath, has also gone quiet. Its founder didn’t respond to emails seeking comment.

For now, start-ups focus on narrower goals or on industries that don’t care about portability.

California-based Aromyx, for example, is working with major food companies to help them capture a digital profile for every odor, using its EssenceChip. Wave some food across the device and it captures a digital signature that can be manipulated as if it were a sound or image file.

But, despite its name, this is not being done on silicon, says CEO Chris Hanson. Nor is the device something you could carry or wear. ‘Mobile and wearable are a decade away at least,’ he says.

Partly, the problem is that we still don’t understand well how humans and animals detect and interpret smells. The Nobel prize for understanding the principles of olfaction, or smell, was awarded only 12 years ago.

‘The biology of olfaction is still a frontier of science, very connected to the frontier of neuroscience,’ says Edwards, the Harvard chemical engineer.

MORE PUSH THAN PULL

That leaves start-ups reaching for lower-hanging fruit.

Snodgrass is funding a start-up called Tzoa, a wearable that measures air quality. He says interest in this from polluted China is particularly strong. Another, Nima, raised $9 million last month to build devices that can test food for proteins and substances, including gluten, peanuts and milk. Its first product will be available shortly, the company says.

For now, mobile phones are more likely to deliver smells than detect them. Edwards’ Vapor Communications, for example, in April launched Cyrano, a tub-sized cylinder that users can direct to emit scents from a mobile app – in the same way iTunes or Spotify directs a speaker to emit sounds.

Japanese start-up Scentee is revamping its scent-emitting smartphone module, says co-founder Koki Tsubouchi, shifting focus from sending scent messages to controlling the fragrance of a room.

There may be scepticism – history and cinemas are littered with the residue of failed attempts to introduce smell into our lives going back to the 1930s – but companies sniff a revival.

Dutch group Philips (PHG.AS) filed a recent patent for a device that would influence, or prime, users’ behavior by stimulating their senses, including through smell. Nike (NKE.N) filed something similar, pumping scents through a user’s headphones or glasses to improve performance.

The holy grail, though, remains sensing smells.

Samsung Electronics (005930.KS) was recently awarded a patent for an olfactory sensor that could be incorporated into any device, from a smartphone to an electronic tattoo.

One day these devices will be commonplace, says Avery Gilbert, an expert on scent and author of a book on the science behind it, gradually embedding specialized applications into our lives.

‘I don’t think you’re going to solve it all at once,’ he says.

(Reporting by Jeremy Wagstaff; Editing by Ian Geoghegan)

Aryballe Technologies CEO Tristan Rousselle shows a prototype of NeOse, a universal portable odour detection device connected to a database on a mobile phone, in Paris, April 20, 2016.

Apple Takes on Evernote?

Apple’s update to OS X allows users to import Evernote notes into Notes (if you see what I mean) painlessly and effectively: Import your notes and files to the Notes app.

As far as I know, this is the first time an app with some heft has included this capability — there are third party tools for OneNote, but no native functions. 

To me, this is the first serious challenge to Evernote: why would you bother with Evernote if you’re an iOS and OS X user?

There are limitations, I suspect. I can’t find any way to add tags, and it seems the tags preserved in an .enex/XML file are lost on import. That’s a showstopper for me. And of course some of the deeper features of Evernote aren’t there — saved searches and what have you. And if you use Android and/or Windows this is not going to help you.
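If, like me, you care about tags, one workaround is to pull them out of the .enex file yourself before importing. An .enex export is just XML, with each note element carrying zero or more tag elements, so a few lines of Python — a sketch, not a supported tool — can at least give you a record of what you’re about to lose:

```python
import xml.etree.ElementTree as ET

def notes_with_tags(enex_xml):
    """Map each note's title to its list of tags from an Evernote .enex export."""
    root = ET.fromstring(enex_xml)  # root element is <en-export>
    return {
        note.findtext("title", default="(untitled)"): [t.text for t in note.findall("tag")]
        for note in root.iter("note")
    }

sample = """<en-export>
  <note><title>Trip notes</title><tag>travel</tag><tag>2016</tag></note>
  <note><title>Untagged idea</title></note>
</en-export>"""
print(notes_with_tags(sample))  # {'Trip notes': ['travel', '2016'], 'Untagged idea': []}
```

You could then fold the tags into note titles before importing, or just keep the listing as a reference. Crude, but better than losing years of tagging outright.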

But I suspect the bigger thing for most heavy users will be a sigh of relief that a player like Apple sees it worthwhile to add this feature. For many users there’s been growing disquiet as to just how long ‘Ever’ means for the company, and the ramifications for their vast Evernote collections.

Uber, a $70 bln company, doesn’t seem to test some of its code


Growing pains, I guess, but this should not be what big disruptive companies look like. I noticed that Uber’s web app offers filters to create lists of historical data — your rides — via criteria like which credit card you used, the city you took the ride in, the month and so on. Great for expenses. Except the filters simply don’t work.

Uber have confirmed it and said they’re working on it. (It’s still not working.) But for a feature like this, wouldn’t you have done even basic testing, like, well, to see that it worked? 

 

Making 3-D objects disappear

I’m a big fan of invisibility cloaks, partly cos they’re cool, and partly because I think the principles behind them could end up in a lot of things. Here’s another step forward: Making 3-D objects disappear:

“Making 3-D objects disappear: Ultrathin invisibility cloak created
Date: September 17, 2015
Source: DOE/Lawrence Berkeley National Laboratory
Summary: Researchers have devised an ultra-thin invisibility ‘skin’ cloak that can conform to the shape of an object and conceal it from detection with visible light. Although this cloak is only microscopic in size, the principles behind the technology should enable it to be scaled-up to conceal macroscopic items as well.”

(Via ScienceDaily.)

Deja Vu or New Dawn? Microsoft’s Acquisition Binge

I’m not quite sure what to make of these acquisitions. It reminds me of Yahoo’s binge 10 years ago: After del.icio.us, a Directory of Other Things Yahoo! Should Buy. They snapped up a lot of my favourite stuff back then, and Microsoft is doing the same thing with Sunrise etc.:

Welcome 6Wunderkinder! Microsoft acquires Wunderlist – The Official Microsoft Blog: “What’s better than completing that last important task on your to-do list? Doing so with a beautiful and useful productivity app. Today, I am thrilled to announce that Microsoft has acquired 6Wunderkinder, the creator of the highly acclaimed to-do list app, Wunderlist.

The addition of Wunderlist to the Microsoft product portfolio fits squarely with our ambition to reinvent productivity for a mobile-first, cloud-first world. Building on momentum for Microsoft Office, OneNote and Skype for Business, as well as the recent Sunrise and Acompli acquisitions, it further demonstrates Microsoft’s commitment to delivering market leading mobile apps across the platforms and devices our customers use – for mail, calendaring, messaging, notes and now tasks.”

One Microsoft person told me, when I complained about how little work had been done on Skype, that “we’re listening to users who said ‘don’t fiddle’ with it.” All well and good, but they could have fixed the more ridiculous things, like not being able to disable birthday notifications in some versions of the app, and losing the plot on groups.

Still, this might be a new Microsoft making these acquisitions, not the old Microsoft or Yahoo!. They’ve done a lovely job integrating Acompli. So maybe there’s hope. I don’t mind these things getting that kind of treatment so long as they do it to reach out to users, rather than to fence them in. That’s going to take quite a change of attitude up in Redmond.

The path to a wearable future lies in academia | Reuters

The path to a wearable future lies in academia | Reuters:

My oblique take on wearables


For a glimpse of what is, what might have been and what may lie ahead in wearable devices, look beyond branded tech and Silicon Valley start-ups to the messy labs, dry papers and solemn conferences of academia.

There you’d find that you might control your smartphone with your tongue, skin or brain; you won’t just ‘touch’ others through a smart watch but through the air; and you’ll change how food tastes by tinkering with sound, weight and color.

Much of today’s wearable technology has its roots in these academic papers, labs and clunky prototypes, and the boffins responsible rarely get the credit some feel they deserve.

Any academic interested in wearable technology would look at today’s commercial products and say ‘we did that 20 years ago,’ said Aaron Quigley, Chair of Human Interaction at University of St. Andrews in Scotland.

Take multi-touch – where you use more than one finger to interact with a screen: Apple (AAPL.O) popularized it with the iPhone in 2007, but Japanese academic Jun Rekimoto used something similar years before.

And the Apple Watch? Its Digital Touch feature allows you to send doodles, ‘touches’ or your heartbeat to other users. Over a decade ago, researcher Eric Paulos developed something very similar, called Connexus, that allowed users to send messages via a wrist device using strokes, taps and touch.

‘I guess when we say none of this is new, it’s not so much trashing the product,’ says Paul Strohmeier, a researcher at Ontario’s Human Media Lab, ‘but more pointing out that this product has its origins in the research of scientists who most people will never hear of, and it’s a way of acknowledging their contributions.’

VAMBRACES, KIDS’ PYJAMAS

Those contributions aren’t all pie-in-the-sky.

Strohmeier and others are toying with how to make devices easier to interact with. His solution: DisplaySkin, a screen that wraps around the wrist like a vambrace, or armguard, adapting its display relative to the user’s eyeballs.

Other academics are more radical: finger gestures in the air, for example, or a ring that knows which device you’ve picked up and automatically activates it. Others use the surrounding skin – projecting buttons onto it or pinching and squeezing it. Another glues a tiny touchpad to a fingernail so you can scroll by running one finger over another.

Then there’s connecting to people, rather than devices.

Mutual understanding might grow, researchers believe, by conveying otherwise hidden information: a collar that glows if the wearer has, say, motion sickness, or a two-person seat that lights up when one occupant has warm feelings for the other.

And if you could convey non-verbal signals, why not transmit them over the ‘multi-sensory Internet’? Away on business? Send a remote hug to your child’s pyjamas; or deliver an aroma from one phone to another via a small attachment; or even, according to researchers from Britain at a conference in South Korea last month, transmit tactile sensations to another person through the air.

And if you can transmit senses, why not alter them?

Academics at a recent Singapore conference focused on altering the flavor of food. Taste, it seems, is not just a matter of the tongue, it’s also influenced by auditory, visual and tactile cues. A Japanese team made food seem heavier, and its flavor change, by secretly adding weights to a fork, while a pair of British academics used music, a virtual reality headset and color to make similar food seem sourer or sweeter to the eater.

MAKING THE GRADE

It’s hard to know just which of these research projects might one day appear in your smartphone, wearable, spoon or item of clothing. Or whether any of them will.

‘I don’t think I’m exaggerating when I say that 99 percent of research work does not end up as ‘product’,’ says Titus Tang, who recently completed a PhD at Australia’s Monash University, and is now commercializing his research in ubiquitous sensing for creating 3D advertising displays. ‘It’s very hard to predict what would turn out, otherwise it wouldn’t be called research.’

But the gap is narrowing between the academic and the commercial.

Academics at the South Korean conference noted that with tech companies innovating more rapidly, ‘while some (academic) innovations may truly be decades ahead of their time, many (conference) contributions have a much shorter lifespan.’

‘Most ‘breakthroughs’ today are merely implementations of ideas that were unimplementable in that particular time. It took a while for industry to catch up, but now they are almost on par with academic research,’ says Ashwin Ashok of Carnegie Mellon.

Pranav Mistry, 33, has risen from a small town in India’s Gujarat state to be director of research at Samsung America (005930.KS). His Singapore conference keynote highlighted a Samsung project where a camera ‘teleports’ viewers to an event or place, offering a real-time, 3D view.

But despite a glitzy video, Samsung logo and sleek black finish, Mistry stressed it wasn’t the finished product.

He was at the conference, he told Reuters, to seek feedback and ‘work with people to make it better.’

(Editing by Ian Geoghegan)

Reuters: Making cars safer: have the driver do less

A piece I wrote for Reuters. BBC version here

Making cars safer: have the driver do less

By Jeremy Wagstaff

SINGAPORE Tue Nov 11, 2014 4:00pm EST

Nov 12 (Reuters) – As millions of cars are under recall for potentially lethal air bags, designers are trying to reduce the need for the device – using sensors, radar, cameras and lasers to prevent collisions in the first place.

With driver error blamed for over 90 percent of road accidents, the thinking is it would be better to have them do less of the driving. The U.S.-based Insurance Institute for Highway Safety found that forward-collision warning systems cut vehicle-to-vehicle crashes by 7 percent – not a quantum leap, but a potential life saver. Nearly 31,000 people died in car accidents in 2012 in the United States alone.

“Passive safety features will stay important, and we need them. The next level is now visible. Autonomous driving for us is clearly a strategy to realise our vision for accident-free driving,” said Thomas Weber, global R&D head at Mercedes-Benz.

While giving a computer full control of a car is some way off, there’s a lot it can do in the meantime.

For now, in some cars you can take your foot off the pedal and hands off the wheel in slow-moving traffic, and the car will keep pace with the vehicle in front; it can jolt you awake if it senses you’re nodding off; alert you if you’re crossing into another lane; and brake automatically if you don’t react to warnings of a hazard ahead.

How close this all comes to leaving the driver out of the equation was illustrated by an experiment at Daimler last year: adding just a few off-the-shelf components to an S-class Mercedes, a team went on a 100 km (62 mile) ride in Germany without human intervention. “The project was about showing how far you can go, not just with fancy lasers, but with stuff you can buy off the shelf,” said David Pfeiffer, one of the team.

Such features, however, require solving thorny problems, including how to avoid pedestrians.

While in-car cameras are good at identifying and classifying objects, they don’t work so well in fog or at night. Radar, on the other hand, can calculate the speed, distance and direction of objects, and works well in limited light, but can’t tell between a pedestrian and a pole. While traffic signs are stationary and similar in shape, people are often neither.

For a better fix on direction there’s LiDAR – a combination of light and radar – which creates a picture of objects using lasers. Velodyne’s sensors on Google’s autonomous car, for example, use up to 64 laser beams spinning 20 times per second to create a 360-degree, 3D view of up to several hundred metres around the car.
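Those numbers imply a striking data rate. A back-of-envelope sketch — the 64 beams and 20 rotations per second come from the description above, but the firings-per-revolution figure is an assumption I’m using for illustration, not from Velodyne’s spec sheet (in practice the laser firing rate is fixed, so angular resolution trades off against spin speed):

```python
def lidar_points_per_second(beams, firings_per_rev, revs_per_second):
    """Points generated per second by a spinning multi-beam LiDAR:
    every revolution, each beam fires once per horizontal step."""
    return beams * firings_per_rev * revs_per_second

# 64 beams, an assumed 1,000 firings per revolution, 20 revolutions/second:
rate = lidar_points_per_second(64, 1000, 20)
print(f"{rate:,} points/second")  # 1,280,000 points/second
```

On the order of a million 3D points every second, which gives a sense of the processing load an autonomous car’s perception stack has to keep up with in real time.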

Mercedes’ ‘Stop-and-Go Pilot’ feature matches the speed of the car in front in slow traffic and adjusts steering to stay in lane using two ultrasonic detectors, five cameras and six radar sensors. “This technology is a first major step,” said R&D chief Weber. “(However distracted the driver is), the system mitigates any accident risk in front.”

HOLY GRAIL

The next stage, experts say, is a road network which talks to cars, and where cars talk to other cars. General Motors has said its 2017 Cadillac CTS will transmit and receive location, direction and speed data with oncoming vehicles via a version of Wi-Fi.

Other approaches include using cameras to monitor the driver. Abdelaziz Khiat, at Nissan Motor’s research centre in Japan, uses cameras to track the driver’s face to detect yawns, a drooping head suggesting drowsiness, or frowns that may indicate the onset of road rage.

These advanced safety features are fine – if you can afford them. The Insurance Institute survey found that the forward collision warning systems were available in fewer than one in every 20 registered vehicles in 2012.

In key markets across emerging Asia, says Klaus Landhaeusser, regional head of government relations at Bosch, many first-time car buyers don’t want to spend more than $2,500. For that, he said, “you won’t be able to introduce any safety features.”

Road conditions are also key. “It will be a long time before we have software and algorithms that can see everything happening” on the roads in emerging markets, said Henrik Kaar, at auto safety equipment market leader Autoliv Inc.

And not everyone welcomes this progress. Some drivers complain the technology is intrusive, or is inconsistent. “If a safety feature is seen as intrusive or bothersome, a driver may try to circumvent or disable it,” said Chris Hayes, a vice president at insurer Travelers.

The key appears to be ensuring that while humans remain in charge of the vehicle, they have good information and features that correct the errors they make.

“For a long time, people thought it was an all-or-nothing jump between humans in charge and fully autonomous vehicles,” said Michael James, senior research scientist at Toyota Motor’s U.S. technical centre. “I don’t think that’s the case anymore. People see it as a more gradual transition.”

 

(Additional reporting by Norihiko Shirouzu; Editing by Ian Geoghegan)