2017 Predictions

This piece was written for the BBC World Service’s Business Daily.

This year is going to be an interesting one, but in technology it’s going to be particularly so. Social media is going to see some reverses, as users start to wake up to the compromises they make in sharing information with companies, governments and the world. But the real progress is going to be making our machines understand us better, in ways that we want.

Artificial intelligence: you’re no doubt rolling your eyes at the phrase, given how many times this technology has been promised as being around the next corner. I’m with you. But I think the focus has been in the wrong place: voice. Apple’s Siri has not been a huge success — except for my daughter, who loves talking to an adult she can be rude to — and Amazon’s Alexa, though impressive, is going to confine itself to those places where we feel comfortable talking to machines: the home.

That makes it inherently limited. Ours is actually a largely text-based world — we still use email, we prefer to text or WhatsApp our friends — and this is where AI is going to be most useful. I already use an AI assistant called Evie to schedule my appointments; she parses emails I send her and, with a little human help, sets up meetings and calls on my behalf. I save an hour or so a week.

Expect to see more of this: using natural language – the way we usually write — to interact with devices, not via special apps but via whatever channels we already use. It’s our devices — fridges, computers, databases — that have to learn our language and preferred medium, not the other way around. AI will be a success if it can master this, and this year will be key.
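To make that concrete, here is a toy sketch of the kind of parsing such an assistant has to do. It is emphatically not how Evie actually works, just an illustration of fishing a meeting time out of free-form English using Python’s dateutil library:

```python
# A toy sketch of natural-language scheduling (not Evie's actual method):
# dateutil's fuzzy parser skips words it can't interpret and keeps the rest.
from dateutil import parser

def extract_meeting_time(email_text):
    """Return a datetime found in free-form text, or None if there isn't one."""
    try:
        # fuzzy=True tells dateutil to ignore tokens that aren't date-like.
        return parser.parse(email_text, fuzzy=True)
    except (ValueError, OverflowError):
        return None

print(extract_meeting_time("Could we speak Thursday at 3pm about the budget?"))
```

A real assistant layers intent detection, calendar lookups and human review on top, but the core job is the same: turning our language into something a machine can act on.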

Indeed, the same principle will be applied elsewhere: removing the machine-like elements of our interactions. AI will help us talk to machines better, but machines will also help immerse us in experiences. Pokemon Go, the mobile app that led many people astray catching and battling weird critters, was a hit because it took a decade-old technology, augmented reality, and bolted it onto something that people actually found useful. Well, not useful, exactly, but compelling.

Augmented reality took technology into the real world, and gave it an enticing layer. The next step: using technology to shrink the distance between people and the real world. Optimists are calling it teleportation — moving you to places you wouldn’t normally go, or can’t go. That could be a 360-degree video from a live event, or drones filming from way above you, or even experiencing something akin to physical touch with someone who’s far away from you. A Singapore startup offers a remote kissing machine, which it of course has called the Kissenger.

Industry is getting excited about this because it sees the possibility of creating a digital twin of a real-world device — a turbine, say — and then managing and experimenting on that digital version of the real thing. A Malaysian company does something similar with corpses — scanning the deceased so that post-mortems can be conducted digitally. The original body is left untouched — which may please relatives, but also means the number of post-mortems can be limitless, and they can be performed by someone on the other side of the world.

All of this technology is available now, but it still takes some vision and money to bring it to market. But what people want is clear enough: technology should bring people closer to each other and their machines, but stay out of the way as much as possible. We may not successfully wean ourselves off our mobile screens any time soon, but we could at least make what we see, hear, and do on those screens as useful, exciting and human as possible.

Xiaomi Goes Virtually Edgeless By Using Ultrasound


Regular readers will know I’ve been looking out for this to happen for a while: the use of sound, or rather ultrasound, as a form of interface. Here’s a Reuters piece I did on it a year ago:  From pixels to pixies: the future of touch is sound | Reuters:

Ultrasound – inaudible sound waves normally associated with cancer treatments and monitoring the unborn – may change the way we interact with our mobile devices.

But the proof will be in the pudding, I reckoned:

Perhaps the biggest obstacle to commercialising mid-air interfaces is making a pitch that appeals not just to consumers’ fantasies but to the customer’s bottom line.

Norwegian start-up Elliptic Labs, for example, says the world’s biggest smartphone and appliance manufacturers are interested in its mid-air gesture interface because it requires no special chip and removes the need for a phone’s optical sensor.

Elliptic CEO Laila Danielsen says her ultrasound technology uses existing microphones and speakers, allowing users to take a selfie, say, by waving at the screen.

Gesture interfaces, she concedes, are nothing new. Samsung Electronics had infra-red gesture sensors in its phones, but says “people didn’t use it”.

Danielsen says her technology is better because it’s cheaper and broadens the field in which users can control their devices.

That day has arrived. Xiaomi’s new MIX phone, Elliptic Labs says, is the first smartphone to use its Ultrasound Proximity Software:

INNER BEAUTY replaces the phone’s hardware proximity sensor with ultrasound software and allows the speaker to be completely invisible, extending the functional area of the screen all the way to the top edge of the phone.

Until now, all smartphones required an optical infrared hardware proximity sensor to turn off the screen and disable the touch functionality when users held the device up to their ear.

Without the proximity sensor, a user’s ear or cheek could accidentally trigger actions during a call, such as hanging up the call or dialing numbers while the call is ongoing.

However, INNER BEAUTY — built on Elliptic Labs’ BEAUTY ultrasound proximity software — uses patented algorithms not only to remove the proximity sensor, but also to hide the speaker behind the phone’s glass screen.

Besides eliminating the unsightly holes on a phone’s screen, Elliptic Labs’ technology eliminates common issues with hardware proximity sensors, such as their unreliability in certain weather conditions or in response to various skin colors as well as dark hair.

This is a good first step. The point here, of course, for the company is that it can push the display right to the top, which definitely looks nice (the front-facing camera, if you’re wondering, is now at the bottom). But the use of ultrasound has lots of interesting implications — not least for how we interact with our phones. If gestures actually work, rather than just being claimed to work, they could make interacting with our devices as interesting as voice, maybe more so.
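For the curious, the principle is simple echolocation: the phone plays an inaudible tone through its speaker and listens for the reflection with its own microphone, and the round-trip delay gives the distance. Here is a minimal simulation of that idea, my own sketch of time-of-flight ranging rather than Elliptic Labs’ patented algorithms:

```python
# Simulating ultrasound proximity sensing: emit a 20 kHz chirp, "record"
# its echo, and recover the distance from the round-trip delay.
# My own illustrative sketch, not Elliptic Labs' algorithm.
import numpy as np

FS = 96_000   # sample rate (Hz); phone audio chains can reach near-ultrasound
C = 343.0     # speed of sound in air (m/s)

# 1. A short 20 kHz tone burst, inaudible to most adults.
t = np.arange(0, 0.005, 1 / FS)
chirp = np.sin(2 * np.pi * 20_000 * t)

# 2. Simulate an echo off an ear ~2 cm away (round trip = 2 * distance).
distance_m = 0.02
delay_samples = int(round((2 * distance_m / C) * FS))
recording = np.zeros(2048)
recording[delay_samples:delay_samples + chirp.size] += 0.3 * chirp
recording += 0.01 * np.random.randn(recording.size)  # microphone noise

# 3. Cross-correlate: the lag of the peak is the round-trip time.
corr = np.correlate(recording, chirp, mode="valid")
lag = int(np.argmax(np.abs(corr)))
estimated_m = (lag / FS) * C / 2
print(f"estimated distance: {estimated_m * 100:.1f} cm")  # roughly 2 cm
```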

Jack’s Hit: Apple’s Missing Socket

There’s been a lot of talk about the removal of the iPhone’s audio jack, most of it knee-jerk, albeit sometimes amusing.

I’m no fan-boi, but I find most of this coverage small-minded. Yes, I get that there’s a potential inconvenience here:

  • If you don’t have the Lightning-to-jack adapter, then you can’t use your existing earphones.
  • Yes, Apple is prodding you in the direction of its expensive wireless AirPods.
  • Yes, wireless tech is not quite as ready as it could be for the pairing to be seamless.
  • Yes, these things are easy to lose.
  • Yes, using the headphones and charging at the same time is not going to be possible without some adapter. (This is an oversight, I agree.)
  • Yes, Apple makes more money, because it owns the Lightning connector and makes maybe $4 off each device that uses it. (Yes, I don’t like this either. But the wireless…)

But two years down the track these kinds of arguments will seem as anachronistic as those that lamented the phasing out of the floppy drive, the serial port, the parallel port, the CD/DVD-ROM drive, and Apple’s own FireWire and 30-pin connectors. (The ultimate Apple I/O death chart – The Verge)

Oddly, the arguments of both Apple and its supporters are also somewhat limited in their horizons. Apple argues that it needs more space inside the device to pack more goodies in. That the technology itself is more than 100 years old. That it makes it easier to waterproof the device. That audio via Lightning or wireless is actually as good an experience, if not a better one. Apple has talked about being courageous, which is a tad disingenuous: brave is risking everything on a startup, not making a change when you’ve got $200 billion sitting around.

The real reason why being pro-jack is going to seem a little Luddite in the future is that the future is not just wireless, it’s deviceless. The smart watch tried (and in my view failed) to move the functionality of the smartphone to the wrist. It’s not a natural place for that functionality to be, because you’re still looking at, and tapping on a screen. It’s just smaller, closer to your face and strapped on. Same with Google Glass. Nice idea, but you’re still looking at a screen, and people hate you.

The device should disappear, all of its features — input, output — internalised. Preferably inside the body. But we can’t do that quite yet, hence the earbud. A good earbud should be both controller and receptor. That’s where we’re going. This is what I wrote for Reuters on the subject. Here’s what I said on Reuters TV.

Nothing too revolutionary here. It only seems so because the debate around jack’s hit has been so mundane, so parochial, as if technology should stand still, and technology companies should listen solely to their users. The phrase ‘faster horse’ springs to mind. Apple isn’t even leading the field on this. There are at least three other smartphone companies which have already ditched the audio jack — Oppo did it four years ago.

We’ll look back at the folk who protested the disappearance of the jack as slightly quaint people who didn’t get it. Everything leads inexorably towards breaking down the barriers between us and the technology we use — until eventually it is inside our skull. Next to it is close enough for now.

Hence Ben Thompson, who nailed it with his piece Beyond the iPhone, arguing that this wireless, deviceless future is one which may not involve much of Apple at all.

To Apple’s credit they are, with the creation of AirPods, laying the foundation for a world beyond the iPhone. It is a world where, thanks to their being a product — not services — company, Apple is at a disadvantage; however, it is also a world that Apple, thanks to said product expertise, especially when it comes to chips, is uniquely equipped to create. That the company is running towards it is both wise — the sooner they get there, the longer they have to iterate and improve and hold off competitors — and also, yes, courageous. The easy thing would be to fight to keep us in a world where phones are all that matters, even if, in the long run, that would only prolong the end of Apple’s dominance.

In that sense, Apple has never stood in the way of its own destruction. Yes, it has penny pinched — taxing accessory makers, avoiding taxes elsewhere, squeezing suppliers — but it has not shied away from making these bigger decisions. What is interesting is that in this new world to come it may be at a disadvantage. 

BBC World Service – Smell tech

At the end of this programme is my piece on smell technology, if you like that kind of thing: BBC World Service – Business Daily, UK FinTech Mulls a Post-Brexit Future. (With everything else going on it might seem a bit flippant, or maybe light relief.)

Can the UK’s financial technology or FinTech sector maintain its global lead after Brexit? We speak to Lawrence Wintermeyer, the chairman of the industry’s trade body Innovate Finance, about what he hopes the British government will negotiate in a new deal with the EU. Also, Michael Pettis, professor of finance at Peking University, tells us what Brexit looks like from China and why financial markets have been resilient to the initial shock of the referendum’s result. Plus, what’s the point of a smartphone that can smell? Jeremy Wagstaff, Thomson Reuters’ chief technology correspondent for Asia, says you may be surprised.

Nose job: smells are smart sensors’ last frontier | Reuters

My piece for Reuters about the technology of smell: Nose job: smells are smart sensors’ last frontier | Reuters. A video version is here.

Nose job: smells are smart sensors’ last frontier

SINGAPORE | BY JEREMY WAGSTAFF

Phones or watches may be smart enough to detect sound, light, motion, touch, direction, acceleration and even the weather, but they can’t smell.

That’s created a technology bottleneck that companies have spent more than a decade trying to fill. Most have failed.

A powerful portable electronic nose, says Redg Snodgrass, a venture capitalist funding hardware start-ups, would open up new horizons for health, food, personal hygiene and even security.

Imagine, he says, being able to analyze what someone has eaten or drunk based on the chemicals they emit; detect disease early via an app; or smell the fear in a potential terrorist. ‘Smell,’ he says, ‘is an important piece’ of the puzzle.

It’s not through lack of trying. Aborted projects and failed companies litter the aroma-sensing landscape. But that’s not stopping newcomers from trying.

Like Tristan Rousselle’s Grenoble-based Aryballe Technologies, which recently showed off a prototype of NeOse, a hand-held device he says will initially detect up to 50 common odors. ‘It’s a risky project. There are simpler things to do in life,’ he says candidly.

MASS, NOT ENERGY

The problem, says David Edwards, a chemical engineer at Harvard University, is that unlike light and sound, scent is not energy, but mass. ‘It’s a very different kind of signal,’ he says.

That means each smell requires a different kind of sensor, making devices bulky and limited in what they can do. The aroma of coffee, for example, consists of more than 600 components.

France’s Alpha MOS was first to build electronic noses for limited industrial use, but its foray into developing a smaller model that would do more has run aground. Within a year of unveiling a prototype for a device that would allow smartphones to detect and analyze smells, the website of its U.S.-based arm Boyd Sense has gone dark. Neither company responded to emails requesting comment.

The website of Adamant Technologies, which in 2013 promised a device that would wirelessly connect to smartphones and measure a user’s health from their breath, has also gone quiet. Its founder didn’t respond to emails seeking comment.

For now, start-ups focus on narrower goals or on industries that don’t care about portability.

California-based Aromyx, for example, is working with major food companies to help them capture a digital profile for every odor, using its EssenceChip. Wave some food across the device and it captures a digital signature that can be manipulated as if it were a sound or image file.

But, despite its name, this is not being done on silicon, says CEO Chris Hanson. Nor is the device something you could carry or wear. ‘Mobile and wearable are a decade away at least,’ he says.

Partly, the problem is that we still don’t understand well how humans and animals detect and interpret smells. The Nobel prize for understanding the principles of olfaction, or smell, was awarded only 12 years ago.

‘The biology of olfaction is still a frontier of science, very connected to the frontier of neuroscience,’ says Edwards, the Harvard chemical engineer.

MORE PUSH THAN PULL

That leaves start-ups reaching for lower-hanging fruit.

Snodgrass is funding a start-up called Tzoa, which makes a wearable that measures air quality. He says interest in this from polluted China is particularly strong. Another, Nima, raised $9 million last month to build devices that can test food for proteins and substances, including gluten, peanuts and milk. Its first product will be available shortly, the company says.

For now, mobile phones are more likely to deliver smells than detect them. Edwards’ Vapor Communications, for example, in April launched Cyrano, a tub-sized cylinder that users can direct to emit scents from a mobile app – in the same way iTunes or Spotify directs a speaker to emit sounds.

Japanese start-up Scentee is revamping its scent-emitting smartphone module, says co-founder Koki Tsubouchi, shifting focus from sending scent messages to controlling the fragrance of a room.

There may be scepticism – history and cinemas are littered with the residue of failed attempts to introduce smell into our lives going back to the 1930s – but companies sniff a revival.

Dutch group Philips recently filed a patent for a device that would influence, or prime, users’ behavior by stimulating their senses, including through smell. Nike has filed something similar: pumping scents through a user’s headphones or glasses to improve performance.

The holy grail, though, remains sensing smells.

Samsung Electronics was recently awarded a patent for an olfactory sensor that could be incorporated into any device, from a smartphone to an electronic tattoo.

One day these devices will be commonplace, says Avery Gilbert, an expert on scent and author of a book on the science behind it, gradually embedding specialized applications into our lives.

‘I don’t think you’re going to solve it all at once,’ he says.

iPad Pro Thoughts

Jean-Louis Gassée again hits the right note in his piece on the iPad Pro: Wrong Questions | Monday Note. Tim Cook shouldn’t go around saying it will replace the laptop. It might for him, but the laptop/PC has evolved to be used in myriad ways, not all of which are best suited to a big screen and unwieldy, optional keyboard. 

Why not say that the iPad Pro will helpfully replace a laptop for 60%, or 25% of conventional personal computer users? In keeping with Steve Jobs’ Far Better At Some Key Things formula, why not say that the iPad Pro is a great laptop replacement for graphic designers, architects, mechanical engineers, musicians, videographers…and that the audience will grow even larger as new and updated apps take advantage of the iPad Pro’s screen size, speed, and very likable Pencil.

And it’s not just that. Taking up his and others’ theme that at each stage of hardware evolution we’ve lacked the imagination to realise what these devices might best be used for, I imagine the big screen and power of the iPad Pro will yield uses that we so far have not considered. 

As with wearables, these devices are as much about creating new markets (this is something I’ve never been able to do before) or extending existing ones (I could do this before, but it wasn’t much fun) as anything else. I’m not about to replace my laptop with an iPad Pro, but I could see a lot of things I would love to do with it — music editing, photo editing and organising, and maybe a bit of doodling. As Horace Dediu’s video The new iPad is like nothing we’ve ever seen before shows, there are lots of great visualization possibilities too.

Is it a work tool? Could be, for some industries. It’s not a very mobile beast. 

The question is: will developers see enough reward in supporting it with apps?

From pixels to pixies: the future of touch is sound

My piece on using sound and lasers to create 3-dimensional interfaces. It’s still some ways off, but it’s funky.


Screenshot from Ultrahaptics video demo

From pixels to pixies: the future of touch is sound | Reuters:

SINGAPORE | BY JEREMY WAGSTAFF

(The video version: The next touchscreen is sound you can feel | Reuters.com)

Ultrasound – inaudible sound waves normally associated with cancer treatments and monitoring the unborn – may change the way we interact with our mobile devices.

Couple that with a different kind of wave – light, in the form of lasers – and we’re edging towards a world of 3D, holographic displays hovering in the air that we can touch, feel and control.

UK start-up Ultrahaptics, for example, is working with premium car maker Jaguar Land Rover [TAMOJL.UL] to create invisible air-based controls that drivers can feel and tweak. Instead of fumbling for the dashboard radio volume or temperature slider, and taking your eyes off the road, ultrasound waves would form the controls around your hand.

‘You don’t have to actually make it all the way to a surface, the controls find you in the middle of the air and let you operate them,’ says Tom Carter, co-founder and chief technology officer of Ultrahaptics.

Such technologies, proponents argue, are an advance on devices we can control via gesture – like Nintendo’s Wii or Leap Motion’s sensor device that allows users to control computers with hand gestures. That’s because they mimic the tactile feel of real objects by firing pulses of inaudible sound to a spot in mid air.

They also move beyond the latest generation of tactile mobile interfaces, where companies such as Apple and Huawei [HWT.UL] are building more response into the cold glass of a mobile device screen.

Ultrasound promises to move interaction from the flat and physical to the three dimensional and air-bound. And that’s just for starters.

By applying similar theories about waves to light, some companies hope to not only reproduce the feel of a mid-air interface, but to make it visible, too.

Japanese start-up Pixie Dust Technologies, for example, wants to match mid-air haptics with tiny lasers that create visible holograms of those controls. This would allow users to interact, say, with large sets of data in a 3D aerial interface.

‘It would be like the movie ‘Iron Man’,’ says Takayuki Hoshi, a co-founder, referencing a sequence in the film where the lead character played by Robert Downey Jr. projects holographic images and data in mid-air from his computer, which he is then able to manipulate by hand.

BROKEN PROMISES

Japan has long been at the forefront of this technology. Hiroyuki Shinoda, considered the father of mid-air haptics, said he first had the idea of an ultrasound tactile display in the 1990s and filed his first patent in 2001.

His team at the University of Tokyo is using ultrasound technology to allow people to remotely see, touch and interact with things or each other. For now, the distance between the two is limited by the use of mirrors, but one of its inventors, Keisuke Hasegawa, says this could eventually be converted to a signal, making it possible to interact whatever the distance.

For sure, promises of sci-fi interfaces have been broken before. And even the more modest parts of this technology are some way off. Lee Skrypchuk, Jaguar Land Rover’s Human Machine Interface Technical Specialist, said technology like Ultrahaptics’ was still 5-7 years away from being in their cars.

And Hoshi, whose Pixie Dust has made promotional videos of people touching tiny mid-air sylphs, says the cost of components needs to fall further to make this technology commercially viable. ‘Our task for now is to tell the world about this technology,’ he says.

Pixie Dust is in the meantime also using ultrasound to form particles into mid-air shapes, so-called acoustic levitation, and to build speakers that direct sound to some people in a space and not others – useful in museums or at road crossings, says Hoshi.

FROM KITCHEN TO CAR

But the holy grail remains a mid-air interface that combines touch and visuals.

Hoshi says touching his laser plasma sylphs feels like a tiny explosion on the fingertips, and would best be replaced by a more natural ultrasound technology.

And even laser technology itself is a work in progress.

Another Japanese company, Burton Inc, offers live outdoor demonstrations of mid-air laser displays fluttering like fireflies. But founder Hidei Kimura says he’s still trying to interest local governments in using it to project signs that float in the sky alongside the country’s usual loudspeaker alerts during a natural disaster.

Perhaps the biggest obstacle to commercializing mid-air interfaces is making a pitch that appeals not just to consumers’ fantasies but to the customer’s bottom line.

Norwegian start-up Elliptic Labs, for example, says the world’s biggest smartphone and appliance manufacturers are interested in its mid-air gesture interface because it requires no special chip and removes the need for a phone’s optical sensor.

Elliptic CEO Laila Danielsen says her ultrasound technology uses existing microphones and speakers, allowing users to take a selfie, say, by waving at the screen.

Gesture interfaces, she concedes, are nothing new. Samsung Electronics had infra-red gesture sensors in its phones, but says ‘people didn’t use it’.

Danielsen says her technology is better because it’s cheaper and broadens the field in which users can control their devices. Next stop, she says, is including touchless gestures into the kitchen, or cars.

(Reporting by Jeremy Wagstaff; Editing by Ian Geoghegan)
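A footnote for the technically minded: the focusing trick at the heart of mid-air haptics is classic phased-array geometry. Each transducer fires with a delay chosen so that every wavefront arrives at the target point at the same instant, concentrating pressure there. A back-of-envelope sketch (mine, not Ultrahaptics’ code):

```python
# Phased-array focusing in miniature: compute per-transducer firing delays
# so that all wavefronts arrive at a mid-air focal point simultaneously.
# An illustrative geometry sketch, not any vendor's implementation.
import numpy as np

C = 343.0  # speed of sound in air (m/s)

def focus_delays(transducer_pos, focal_point):
    """Per-transducer firing delays (seconds) to focus at focal_point.

    transducer_pos: (N, 3) array of transducer positions (metres).
    focal_point:    (3,) target point in mid-air (metres).
    """
    dists = np.linalg.norm(transducer_pos - focal_point, axis=1)
    times = dists / C            # time of flight for each element
    return times.max() - times   # farthest element fires first (zero delay)

# A 16 x 16 grid of transducers at 1 cm pitch in the z = 0 plane...
xs, ys = np.meshgrid(np.arange(16) * 0.01, np.arange(16) * 0.01)
array_pos = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])

# ...focused on a point 20 cm above the centre of the array.
delays = focus_delays(array_pos, np.array([0.075, 0.075, 0.20]))
print(delays.reshape(16, 16)[:2, :2])  # corner elements have near-zero delay
```

Sweep the focal point around fast enough and the fingertips read the moving pressure spot as a shape hanging in the air, which is essentially what the demos above do.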

Cook: 3D Touch a Game Changer

I think 3D Touch is the most important thing that Apple has done for a while, and, as with all such things, I think we won’t really see its significance until later. Cook seems to agree: 20 Minutes With Tim Cook – BuzzFeed News:

“But he’s most excited by 3D Touch. ‘I personally think 3D Touch is a game changer,’ he says. ‘I find that my efficiency is way up with 3D touch, because I can go through so many emails so quickly. It really does cut out a number of navigational steps to get where you’re going.’ Even with just a quick demo, it’s easy to see his point. It’s a major new interface feature, one that threatens to upend the way we navigate through our phones, especially once third-party developers begin implementing it in their applications. Apple has engineered the hell out of this 3D Touch to ensure they’ll do just that.

For Cook, 3D Touch is a tentpole feature of not just the iPhone 6s series, but of the iPhone itself and one that shows the company isn’t saving marquee innovations for those ‘tick’ years. ‘As soon as products are ready we’re going to release them,’ Cook explains. ‘There’s no holding back. We’re not going to look at something and say ‘let’s let’s keep that one for next time.’ We’d rather ship everything we’ve got, and put pressure on ourselves to do something even greater next time.’”

Force field: Apple’s pressure-based screens promise a world beyond cold glass

A piece looking at the technology behind the pressure sensing. My prediction: once people play with it they’ll find it hard to go back to the old way of doing things. Typing on a touchscreen may one day feel natural, maybe even enjoyable.

Force field: Apple’s pressure-based screens promise a world beyond cold glass | Reuters:

SINGAPORE/TAIPEI | BY JEREMY WAGSTAFF AND MICHAEL GOLD

By adding a more realistic sense of touch to its iPhone, Apple Inc may have conquered a technology that has long promised to take us beyond merely feeling the cold glass of our mobile device screens.

In its latest iPhones, Apple included what it calls 3D Touch, allowing users to interact more intuitively with their devices via a pressure-sensitive screen which mimics the feel and response of real buttons.

In the long run, the force-sensitive technology also promises new or better applications, from more lifelike games and virtual reality to adding temperature, texture and sound to our screens.

‘Force Touch is going to push the envelope of how we interact with our screens,’ says Joel Evans, vice president of mobile enablement at Mobiquity, a mobile consultancy.

The fresh iPhones, unveiled on Wednesday, incorporate a version of the Force Touch technology already in some Apple laptop touchpads and its watches. Apple also announced a stylus that includes pressure sensing technology.

As with previous forays, from touch screens to fingerprint sensors, Apple isn’t the first with this technology, but by combining some existing innovations with its own, it could leverage its advantage of control over hardware, interface and the developers who could wrap Force Touch into their apps.

‘Here we go again. Apple’s done it with gyroscopes, accelerometers, they did it with pressure sensors, they’ve done it with compass, they’ve been great at expediting the adoption of these sensors,’ said Ali Foughi, CEO of US-based NextInput, which has its own technology, trademarked ForceTouch. ‘Apple is at the forefront.’

TOUCHY FEELY

Haptic technology – a tactile response to touching an interface – isn’t new, even in mobile devices. Phones have long vibrated to alert users of incoming calls in silent mode, or when they touch an onscreen button.

But efforts to go beyond that have been limited.

BlackBerry incorporated pressure sensing into its Storm phone in 2008. And Rob Lacroix, vice president of engineering at Immersion Corp, said his company worked in 2012 with Fujitsu on the Raku-Raku Smartphone, an Android phone that could distinguish between a soft and firm touch to help users unfamiliar with handheld devices.

But most efforts have been hamstrung by either a poor understanding of the user’s needs, or technical limitations. A vibrating buzz, for instance, has negative connotations, causing most people to turn off any vibration feature, says James Lewis, CEO of UK-based Redux, which has been working on similar touch technology for several years.

The technology powering vibrations is also primitive, he said, meaning there’s a slight delay and a drain on the battery. Early versions of pressure-sensing technology also required a slight gap between screen and enclosure, leaving it vulnerable to the elements.

Apple seems to have solved such problems, experts said, judging from its trackpads and the Apple Watch. Indeed, the trackpad carries the same sensation of a physical click as its predecessors, but without the actual pad moving at all.

The result: In the short term, Force Touch may simply make interacting with a screen more like something we’d touch in real life – a light switch, say, or a physical keyboard. With Force Touch, the device should be able to tell not only whether we are pressing the screen, but how firmly. It should in turn respond with a sensation – not just a vibration, but with a click – even if that click is itself a trick of technology.

‘What we’re going to see initially is putting life back into dead display,’ said Redux’s Lewis. ‘We just got used to the cold feel of glass.’

HARD PRESSED

To be sure, mobile is not the first industry to flirt with haptics.

For example, for car drivers, Redux demonstrates a tablet-like display which creates the illusions of bumps and friction when you run your fingers over the glass, mimicking physical buttons and sliders so your eyes don’t need to leave the road.

Mobiquity’s technical adviser Robert McCarthy points to several potential uses of Apple’s technology – measuring the force of touch when entering a password, say, to indicate how confident the user is of their selection, or keying in a numeric passcode using different pressure levels as an extra layer of security.

While Apple’s adoption of the technology has awoken the mobile industry to its possibilities, it was pipped to the post by Chinese handset maker Huawei, which this month unveiled one model with what it also tagged Force Touch technology. Pressing harder in a photo app, for example, allows you to zoom in on a picture without the usual two-finger spread.

Other manufacturers are exploring how to make touching a device more friendly, and more advanced, says Freddie Liu, CFO of Taiwan-based TPK Holding Co Ltd, an Apple supplier.

‘This is just the beginning for Force Touch,’ he said.

(Reporting by Jeremy Wagstaff and Michael Gold, with additional reporting by Reiji Murai in TOKYO; Editing by Ian Geoghegan and Raju Gopalakrishnan)

Factbox: iPhone 3D Touch suppliers and haptics companies | Reuters
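McCarthy’s pressure-as-passcode idea, mentioned in the piece above, is easy to sketch: quantize each key press into a coarse force level and require the force pattern to match as well as the digits. A hypothetical illustration (the thresholds, stored code and entries are all invented):

```python
# Sketch of a pressure-augmented passcode: accept the code only if both
# the digits and how firmly each was pressed match the stored pattern.
# Thresholds and the stored code below are invented for illustration.

def quantize(force, light_max=0.33, medium_max=0.66):
    """Map a normalized 0..1 touch force to a coarse pressure level."""
    if force <= light_max:
        return "light"
    if force <= medium_max:
        return "medium"
    return "firm"

STORED = [("4", "light"), ("7", "firm"), ("2", "light"), ("9", "medium")]

def check_passcode(entry):
    """entry: list of (digit, normalized_force) tuples from the touchscreen."""
    attempt = [(digit, quantize(force)) for digit, force in entry]
    return attempt == STORED

print(check_passcode([("4", 0.1), ("7", 0.9), ("2", 0.2), ("9", 0.5)]))  # True
print(check_passcode([("4", 0.9), ("7", 0.9), ("2", 0.2), ("9", 0.5)]))  # False: right digits, wrong pressure
```

Even a shoulder-surfer who watches you type the digits still has to reproduce how hard you pressed each one, which is the extra layer the article is gesturing at.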

Moleskines Redux


Of course, I claim a lot of the credit for this decade-long trend. From Why Startups Love Moleskines:

“The notion that non-digital goods and ideas have become more valuable would seem to cut against the narrative of disruption-worshipping techno-utopianism coming out of Silicon Valley and other startup hubs, but, in fact, it simply shows that technological evolution isn’t linear. We may eagerly adopt new solutions, but, in the long run, these endure only if they truly provide us with a better experience—if they can compete with digital technology on a cold, rational level.”

I have returned to Moleskines recently, partly because I realised I have a cupboard full of them, and partly because of exactly this problem: there’s no equivalent digital experience.

  • easier to conceptualise on paper
  • you can doodle when the speaker is waffling; those doodles embellish the notes, even turn them into something like Mike Rohde’s sketchnotes
  • you can whip it out in places where an electronic device would be weird, or rude, or impractical
  • there’s a natural timeline to your thoughts
  • there’s something sensual about having a pen in your hands and holding a notebook
  • pen and Moleskine focus your thoughts and attention
  • the cost of the book acts as a brake on mindless note-taking (writing stuff down without really thinking why)
  • no mindmap software has ever really improved the mindmapping experience
There’s probably more to it. But maybe the point is that this isn’t a fad. People in the geeky community have been using these for more than a decade, suggesting that they have established themselves as a viable tool. Being able to easily digitise them — for saving, or processing, as I did this morning with a chart I sketched out that a graphics colleague wanted to poach — is a bonus, and saves us from the fear of losing our work.

(Via Newley Purnell)