Narrowband Goes Broad

Seems LoRa is really taking off. Citing data from research firm Analysys Mason, Chris Donkin writes that 85 new networks were announced as live, in a trial phase or in development in 2016 compared with 29 in 2015.

While early LPWA deployments were concentrated in the US and Western Europe, Analysys Mason found interest in the technology spread during 2016, with strong traction being seen in the APAC market.

During 2015, two thirds of initiatives took place in the US and Western Europe whereas in 2016 the figure was down to less than a third. Simultaneously APAC showed growth from 4 per cent in 2015 to 30 per cent in 2016.

The report identified developments in Japan, Singapore, South Korea, Australia and New Zealand as being especially significant in the regional shift identified last year.
– via Mobile World Live

While a lot of these are led by Sigfox or by operators using the NB-IoT standard — a stripped-down version of 3G — more interesting, I think, is LoRa, which actually provided the single largest group: 29 deployments vs 27 for Sigfox.

The LoRa Alliance says 17 nationwide deployments have been publicly announced, and there are live networks in more than 150 cities. So I’m guessing AM’s numbers are somewhat conservative. The Things Network, an open source implementation of LoRa, boasts dozens of communities — people who are working on networks, however small — and while most are in Europe and the US, Australia is strong — Sydney’s Meshed Network Pty has installed five gateways around the city.
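If you're curious what "working on a network" looks like in practice, the barrier to entry is low. Here's a minimal sketch of an application listening for device uplinks over The Things Network's public MQTT broker — assuming the v2-era broker address and topic layout, with the application ID and access key as placeholders for what you'd get from the TTN console:

```python
import json

import paho.mqtt.client as mqtt

APP_ID = "my-lora-app"               # placeholder application ID
ACCESS_KEY = "ttn-account-v2.XXXX"   # placeholder access key

def on_connect(client, userdata, flags, rc):
    # Subscribe to uplinks from every device registered to the application
    client.subscribe("+/devices/+/up")

def on_message(client, userdata, msg):
    uplink = json.loads(msg.payload)
    # payload_fields is present if a decoder is configured for the app
    print(msg.topic, uplink.get("payload_fields"))

client = mqtt.Client()
client.username_pw_set(APP_ID, ACCESS_KEY)
client.on_connect = on_connect
client.on_message = on_message
client.connect("eu.thethings.network", 1883, 60)
client.loop_forever()
```

A couple of dozen lines and a $100 gateway nearby, and your sensor data is flowing — which goes some way to explaining why those community numbers keep growing.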

The author of the AM piece, Aris Xylouris, says “we can expect more announcements to be made before Mobile World Congress (MWC) 2017. It is likely that the launch of the first real commercial deployment of an NB-IoT network will be among the announcements at MWC 2017.”

Here’s my take from August on narrowband.

Xiaomi Goes Virtually Edgeless By Using Ultrasound


Regular readers will know I’ve been looking out for this to happen for a while: the use of sound, or rather ultrasound, as a form of interface. Here’s a Reuters piece I did on it a year ago: From pixels to pixies: the future of touch is sound | Reuters:

Ultrasound – inaudible sound waves normally associated with cancer treatments and monitoring the unborn – may change the way we interact with our mobile devices.

But the proof will be in the pudding, I reckoned:

Perhaps the biggest obstacle to commercialising mid-air interfaces is making a pitch that appeals not just to consumers’ fantasies but to the customer’s bottom line.

Norwegian start-up Elliptic Labs, for example, says the world’s biggest smartphone and appliance manufacturers are interested in its mid-air gesture interface because it requires no special chip and removes the need for a phone’s optical sensor.

Elliptic CEO Laila Danielsen says her ultrasound technology uses existing microphones and speakers, allowing users to take a selfie, say, by waving at the screen.

Gesture interfaces, she concedes, are nothing new. Samsung Electronics had infra-red gesture sensors in its phones, but says “people didn’t use it”.

Danielsen says her technology is better because it’s cheaper and broadens the field in which users can control their devices.

That day has arrived. Xiaomi’s new MIX phone, Elliptic Labs says, is the first smartphone to use its Ultrasound Proximity Software:

INNER BEAUTY replaces the phone’s hardware proximity sensor with ultrasound software and allows the speaker to be completely invisible, extending the functional area of the screen all the way to the top edge of the phone.

Until now, all smartphones required an optical infrared hardware proximity sensor to turn off the screen and disable the touch functionality when users held the device up to their ear.

Without the proximity sensor, a user’s ear or cheek could accidentally trigger actions during a call, such as hanging up the call or dialing numbers while the call is ongoing.

However, INNER BEAUTY — built on Elliptic Labs’ BEAUTY ultrasound proximity software — uses patented algorithms not only to remove the proximity sensor, but also to hide the speaker behind the phone’s glass screen.

Besides eliminating the unsightly holes on a phone’s screen, Elliptic Labs’ technology eliminates common issues with hardware proximity sensors, such as their unreliability in certain weather conditions or in response to various skin colors as well as dark hair.

This is a good first step. The point here, of course, for the company, is that it can push the display right to the top, which definitely looks nice (the front-facing camera, if you’re wondering, is now at the bottom). But the use of ultrasound has lots of interesting implications — not least for how we interact with our phones. If gestures really work, rather than merely being said to work, they could make interacting with our devices as interesting as voice, maybe more so.
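For the technically curious: the underlying principle is echo-ranging, the same trick sonar uses. The speaker emits an inaudible pulse, the microphone listens for the reflection, and the round-trip delay gives the distance. Elliptic Labs’ actual algorithms are patented and unpublished, so this is just a toy sketch of the idea on synthetic data:

```python
import numpy as np

FS = 48_000          # sample rate (Hz), typical phone audio hardware
C = 343.0            # speed of sound in air (m/s)

# 1. Emit an inaudible pulse near the top of the audio band (~20 kHz)
t = np.arange(0, 0.005, 1 / FS)                # 5 ms pulse
pulse = np.sin(2 * np.pi * 20_000 * t)

# 2. Simulate the microphone recording: an attenuated echo returns after
#    a round trip to an ear ~3 cm away (pure simulation, no real hardware)
distance_true = 0.03
delay = int(2 * distance_true / C * FS)        # round-trip delay in samples
mic = np.zeros(2048)
mic[delay:delay + len(pulse)] += 0.2 * pulse   # the echo
mic += 0.01 * np.random.randn(len(mic))        # ambient noise

# 3. Cross-correlate to find the echo's arrival time, then convert to distance
corr = np.correlate(mic, pulse, mode="valid")
lag = np.argmax(np.abs(corr))
distance_est = lag / FS * C / 2
print(f"estimated distance: {distance_est * 100:.1f} cm")   # ~3 cm
```

The hard part, presumably, is doing this reliably against cheek, hair and pocket lint with whatever speaker and mic the phone maker chose — which is what makes the software, not the idea, the product.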

Nose job: smells are smart sensors’ last frontier | Reuters

My piece for Reuters about the technology of smell: Nose job: smells are smart sensors’ last frontier | Reuters. A video version is here.

Nose job: smells are smart sensors’ last frontier

SINGAPORE | BY JEREMY WAGSTAFF

Phones or watches may be smart enough to detect sound, light, motion, touch, direction, acceleration and even the weather, but they can’t smell.

That’s created a technology bottleneck that companies have spent more than a decade trying to fill. Most have failed.

A powerful portable electronic nose, says Redg Snodgrass, a venture capitalist funding hardware start-ups, would open up new horizons for health, food, personal hygiene and even security.

Imagine, he says, being able to analyze what someone has eaten or drunk based on the chemicals they emit; detect disease early via an app; or smell the fear in a potential terrorist. ‘Smell,’ he says, ‘is an important piece’ of the puzzle.

It’s not through lack of trying. Aborted projects and failed companies litter the aroma-sensing landscape. But that’s not stopping newcomers from trying.

Like Tristan Rousselle’s Grenoble-based Aryballe Technologies, which recently showed off a prototype of NeOse, a hand-held device he says will initially detect up to 50 common odors. ‘It’s a risky project. There are simpler things to do in life,’ he says candidly.

MASS, NOT ENERGY

The problem, says David Edwards, a chemical engineer at Harvard University, is that unlike light and sound, scent is not energy, but mass. ‘It’s a very different kind of signal,’ he says.

That means each smell requires a different kind of sensor, making devices bulky and limited in what they can do. The aroma of coffee, for example, consists of more than 600 components.

France’s Alpha MOS was first to build electronic noses for limited industrial use, but its foray into developing a smaller model that would do more has run aground. Within a year of unveiling a prototype for a device that would allow smartphones to detect and analyze smells, the website of its U.S.-based arm Boyd Sense has gone dark. Neither company responded to emails requesting comment.

The website of Adamant Technologies, which in 2013 promised a device that would wirelessly connect to smartphones and measure a user’s health from their breath, has also gone quiet. Its founder didn’t respond to emails seeking comment.

For now, start-ups focus on narrower goals or on industries that don’t care about portability.

California-based Aromyx, for example, is working with major food companies to help them capture a digital profile for every odor, using its EssenceChip. Wave some food across the device and it captures a digital signature that can be manipulated as if it were a sound or image file.

But, despite its name, this is not being done on silicon, says CEO Chris Hanson. Nor is the device something you could carry or wear. ‘Mobile and wearable are a decade away at least,’ he says.

Partly, the problem is that we still don’t understand well how humans and animals detect and interpret smells. The Nobel prize for understanding the principles of olfaction, or smell, was awarded only 12 years ago.

‘The biology of olfaction is still a frontier of science, very connected to the frontier of neuroscience,’ says Edwards, the Harvard chemical engineer.

MORE PUSH THAN PULL

That leaves start-ups reaching for lower-hanging fruit.

Snodgrass is funding a start-up called Tzoa, which makes a wearable that measures air quality. He says interest in this from polluted China is particularly strong. Another, Nima, raised $9 million last month to build devices that can test food for proteins and substances, including gluten, peanuts and milk. Its first product will be available shortly, the company says.

For now, mobile phones are more likely to deliver smells than detect them. Edwards’ Vapor Communications, for example, in April launched Cyrano, a tub-sized cylinder that users can direct to emit scents from a mobile app – in the same way iTunes or Spotify directs a speaker to emit sounds.

Japanese start-up Scentee is revamping its scent-emitting smartphone module, says co-founder Koki Tsubouchi, shifting focus from sending scent messages to controlling the fragrance of a room.

There may be scepticism – history and cinemas are littered with the residue of failed attempts to introduce smell into our lives going back to the 1930s – but companies sniff a revival.

Dutch group Philips recently filed a patent for a device that would influence, or prime, users’ behavior by stimulating their senses, including through smell. Nike filed something similar, pumping scents through a user’s headphones or glasses to improve performance.

The holy grail, though, remains sensing smells.

Samsung Electronics was recently awarded a patent for an olfactory sensor that could be incorporated into any device, from a smartphone to an electronic tattoo.

One day these devices will be commonplace, says Avery Gilbert, an expert on scent and author of a book on the science behind it, gradually embedding specialized applications into our lives.

‘I don’t think you’re going to solve it all at once,’ he says.

From pixels to pixies: the future of touch is sound

My piece on using sound and lasers to create 3-dimensional interfaces. It’s still some ways off, but it’s funky.


Screenshot from Ultrahaptics video demo

From pixels to pixies: the future of touch is sound | Reuters:

SINGAPORE | BY JEREMY WAGSTAFF

(The video version: The next touchscreen is sound you can feel | Reuters.com)

Ultrasound – inaudible sound waves normally associated with cancer treatments and monitoring the unborn – may change the way we interact with our mobile devices.

Couple that with a different kind of wave – light, in the form of lasers – and we’re edging towards a world of 3D, holographic displays hovering in the air that we can touch, feel and control.

UK start-up Ultrahaptics, for example, is working with premium car maker Jaguar Land Rover [TAMOJL.UL] to create invisible air-based controls that drivers can feel and tweak. Instead of fumbling for the dashboard radio volume or temperature slider, and taking your eyes off the road, ultrasound waves would form the controls around your hand.

‘You don’t have to actually make it all the way to a surface, the controls find you in the middle of the air and let you operate them,’ says Tom Carter, co-founder and chief technology officer of Ultrahaptics.

Such technologies, proponents argue, are an advance on devices we can control via gesture – like Nintendo’s Wii or Leap Motion’s sensor device that allows users to control computers with hand gestures. That’s because they mimic the tactile feel of real objects by firing pulses of inaudible sound to a spot in mid air.

They also move beyond the latest generation of tactile mobile interfaces, where companies such as Apple and Huawei [HWT.UL] are building more response into the cold glass of a mobile device screen.

Ultrasound promises to move interaction from the flat and physical to the three dimensional and air-bound. And that’s just for starters.

By applying similar theories about waves to light, some companies hope to not only reproduce the feel of a mid-air interface, but to make it visible, too.

Japanese start-up Pixie Dust Technologies, for example, wants to match mid-air haptics with tiny lasers that create visible holograms of those controls. This would allow users to interact, say, with large sets of data in a 3D aerial interface.

‘It would be like the movie ‘Iron Man’,’ says Takayuki Hoshi, a co-founder, referencing a sequence in the film where the lead character played by Robert Downey Jr. projects holographic images and data in mid-air from his computer, which he is then able to manipulate by hand.

BROKEN PROMISES

Japan has long been at the forefront of this technology. Hiroyuki Shinoda, considered the father of mid-air haptics, said he first had the idea of an ultrasound tactile display in the 1990s and filed his first patent in 2001.

His team at the University of Tokyo is using ultrasound technology to allow people to remotely see, touch and interact with things or each other. For now, the distance between the two is limited by the use of mirrors, but one of its inventors, Keisuke Hasegawa, says this could eventually be converted to a signal, making it possible to interact whatever the distance.

For sure, promises of sci-fi interfaces have been broken before. And even the more modest parts of this technology are some way off. Lee Skrypchuk, Jaguar Land Rover’s Human Machine Interface Technical Specialist, said technology like Ultrahaptics’ was still 5-7 years away from being in its cars.

And Hoshi, whose Pixie Dust has made promotional videos of people touching tiny mid-air sylphs, says the cost of components needs to fall further to make this technology commercially viable. ‘Our task for now is to tell the world about this technology,’ he says.

Pixie Dust is in the meantime also using ultrasound to form particles into mid-air shapes, so-called acoustic levitation, and speakers that direct sound to some people in a space and not others – useful in museums or at road crossings, says Hoshi.

FROM KITCHEN TO CAR

But the holy grail remains a mid-air interface that combines touch and visuals.

Hoshi says touching his laser plasma sylphs feels like a tiny explosion on the fingertips, and would best be replaced by a more natural ultrasound technology.

And even laser technology itself is a work in progress.

Another Japanese company, Burton Inc, offers live outdoor demonstrations of mid-air laser displays fluttering like fireflies. But founder Hidei Kimura says he’s still trying to interest local governments in using it to project signs that float in the sky alongside the country’s usual loudspeaker alerts during a natural disaster.

Perhaps the biggest obstacle to commercializing mid-air interfaces is making a pitch that appeals not just to consumers’ fantasies but to the customer’s bottom line.

Norwegian start-up Elliptic Labs, for example, says the world’s biggest smartphone and appliance manufacturers are interested in its mid-air gesture interface because it requires no special chip and removes the need for a phone’s optical sensor.

Elliptic CEO Laila Danielsen says her ultrasound technology uses existing microphones and speakers, allowing users to take a selfie, say, by waving at the screen.

Gesture interfaces, she concedes, are nothing new. Samsung Electronics had infra-red gesture sensors in its phones, but says ‘people didn’t use it’.

Danielsen says her technology is better because it’s cheaper and broadens the field in which users can control their devices. Next stop, she says, is including touchless gestures into the kitchen, or cars.

(Reporting by Jeremy Wagstaff; Editing by Ian Geoghegan)

Cook: 3D Touch a Game Changer

I think 3D Touch is the most important thing Apple has done for a while, and, as with all such things, I think we won’t really see its impact until later. Cook seems to agree: 20 Minutes With Tim Cook – BuzzFeed News:

“But he’s most excited by 3D Touch. ‘I personally think 3D Touch is a game changer,’ he says. ‘I find that my efficiency is way up with 3D touch, because I can go through so many emails so quickly. It really does cut out a number of navigational steps to get where you’re going.’ Even with just a quick demo, it’s easy to see his point. It’s a major new interface feature, one that threatens to upend the way we navigate through our phones, especially once third-party developers begin implementing it in their applications. Apple has engineered the hell out of this 3D Touch to ensure they’ll do just that.

For Cook, 3D Touch is a tentpole feature of not just the iPhone 6s series, but of the iPhone itself and one that shows the company isn’t saving marquee innovations for those ‘tick’ years. ‘As soon as products are ready we’re going to release them,’ Cook explains. ‘There’s no holding back. We’re not going to look at something and say ‘let’s keep that one for next time.’ We’d rather ship everything we’ve got, and put pressure on ourselves to do something even greater next time.’”

BBC: Game of Drones

Here’s the BBC World Service version of my Reuters piece on drones from a few months back. Transcript below:

America may still be the tech centre of the world — and it is — but regulatory dithering over whether and how to allow drones — or unmanned aerial vehicles, as the industry calls them — in its airspace is throwing up opportunities for other countries to get a head-start.

And that’s no small thing, for a couple of reasons. One is that the drone industry is moving amazingly quickly. Some liken it to the PC: the technology is getting better, smaller and cheaper, prices are falling so rapidly that everyone can have one, and the gap between what constitutes a serious drone and a toy has narrowed considerably.

There’s another element in this, and it’s also comparable to the PC era. Back then we knew we all wanted a PC but we weren’t quite sure what we wanted it for. We bought one anyway, and felt slightly guilty that it sat in the corner gathering dust. Naysayers questioned the future of an industry that seemed to revolve around convincing people to buy something even when they couldn’t give them a reason to do so.

Sound familiar? A lot of folk, including myself, have bought a drone in the past year. Mine was a tiny one which, upon its maiden flight, floated high into the air and disappeared into next door’s garden. Its second flight ended in a gutter that could only be reached by small children, and my wife drew the line at sending our daughter up there. So I’m now drone-less.

This is the bigger issue with drones — not whether to propel reluctant tykes up ladders, but figuring out what drones are good for. And this is where companies in Europe and Asia are stealing a march on their U.S. cousins. The hardware is all well and good, but the future of drones, like that of computers, is going to be about harnessing their unique capabilities to solve problems, developing use cases, building ecosystems (sorry, I’m obliged by contract to use that word at least once a week).

So, for example, a company here in Singapore is working with companies and government agencies around the region on a range of interesting things — what they and others are calling drones as a service. So if you’re flying over a palm oil plantation in Malaysia doing something quite basic like mapping where, exactly, the edges of the property are, why not calibrate your cameras so they can also measure the moisture level — and likely yield — of individual trees?
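The standard trick here is a vegetation index computed from the camera’s spectral bands — healthy, well-watered canopy reflects strongly in near-infrared and absorbs red. Whether this particular outfit uses NDVI is my assumption, but a minimal sketch of the calculation looks like this:

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index: vigorous vegetation
    scores high, stressed or dry vegetation scores low."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)   # epsilon avoids divide-by-zero

# Toy 2x2 'image': left column a vigorous tree, right column a stressed one
nir_band = np.array([[0.55, 0.30], [0.52, 0.28]])
red_band = np.array([[0.08, 0.20], [0.09, 0.22]])
print(ndvi(nir_band, red_band).round(2))   # high values = healthy canopy
```

Run that over every tree in a plantation survey and the mapping flight you were doing anyway becomes a yield forecast.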

And rather than have building engineers hang dangerously out of skyscrapers to check structural damage, why not have a drone do it? Not only is it safer, you also get a virtual model of your building you can refer back to. Tired of despatching dog catchers in response to citizens’ complaints? Deploy a drone above the target areas and build a heat map of the strays’ movements so you know when best to pounce, and how many leads you’re going to need.

There are lots of other opportunities being explored out there beyond the obvious ones. The trick is going to be building business models around these services, so that when companies see drones they don’t think ‘toy I play with at the weekend’ but ‘this could really help me do something I’ve long thought impossible’.

No question, of course, that the U.S. will be the centre of drone innovation. It already is, if you think in terms of developing the technologies and absorbing venture capital. But it may yet be companies beyond American shores, making the most of their head-start, that emerge as the major players as drones become as commonplace in business, if not homes, as computers are.

BBC: Cars we can’t drive

Let’s face it: we’re not about to have driverless cars in our driveway any time soon. Soonest: a decade. Latest: a lot longer, according to the folk I’ve spoken to.

But in some ways, if you’ve got the dosh, you can already take your foot off the gas and hands off the steering wheel. Higher end cars have what are called active safety features, such as warning you if you stray out of your lane, or if you’re about to fall asleep, or which let the car take over the driving if you’re in heavy, slow moving traffic. Admittedly these are just glimpses of what could happen, and take the onus off you for a few seconds, but they’re there. Already.

The thinking behind all this: More than 90% (roughly, depends who you talk to) of all accidents are caused by human error. So, the more we have the car driving, the fewer the accidents. And there is data that appears to support that. The US-based Insurance Institute for Highway Safety found that forward collision warning systems led to a 7% reduction in collisions between vehicles.

But that’s not quite the whole story. For one thing, performing these feats isn’t easy. Getting a car, for example, to recognise a wandering pedestrian is one of the thorniest problems a scientist working in computer vision could tackle, because you and I may look very different — unlike, say, another car, or a lamppost, or a traffic sign. We’re tall, short, fat, thin, we wear odd clothes and we are unpredictable — just because we’re walking towards the kerb at a rate of knots, does that mean we’re about to walk into the road?

Get this kind of thing wrong and you might have a top-of-the-range Mercedes Benz slam on the brakes for nothing. The driver might forgive the car’s computer the first time, but not the second. And indeed, this is a problem for existing safety features — is that a beep to warn you that you’re reversing too close to an object, or that you haven’t put your seatbelt on, or that you’re running low on windscreen fluid, or that you’re straying into oncoming traffic? We quickly filter out warning noises and flashing lights, as airplane designers have found to their (and their pilots’) cost.

Indeed, there’s a school of thought that says we’re making a mistake by even partially automating this kind of thing. For one thing, we need to know exactly what is going on: can we count on our car to warn us about things that might happen and, in the words of the tech industry, ‘mitigate for us’? Or are these interventions just things that might happen some of the time, if we’re lucky, but not something we can rely on?

If so, what exactly is the point of that? What would be the point of an airbag that can’t be counted on to deploy, or seatbelts that only work some of the time? And then there’s the bigger, philosophical issue: for those people learning to drive for the first time, what are these cars telling them: that they don’t have to worry too much about sticking to lanes, because the car will do it for you? And what happens when they find themselves behind the wheel of a car that doesn’t have those features?

Maybe it’s a good thing we’re seeing these automated features now — because it gives us a chance to explore these issues before the Google car starts driving itself down our street and we start living in a world, not just of driverless cars, but of cars that people don’t know how to drive.

This is a piece I wrote for the BBC World Service, based on a Reuters story.

From balloons to shrimp-filled shallows, the future is wireless

From balloons to shrimp-filled shallows, the future is wireless

BY JEREMY WAGSTAFF

(Reuters) – The Internet may feel like it’s everywhere, but large pockets of sky, swathes of land and most of the oceans are still beyond a signal’s reach.

Three decades after the first cellphone went on sale – the $4,000 Motorola DynaTAC 8000X “Brick” – half the world remains unconnected. For some it costs too much, but up to a fifth of the population, or some 1.4 billion people, live where “the basic network infrastructure has yet to be built,” according to a Facebook white paper last month.

Even these figures, says Kurtis Heimerl, whose Berkeley-based start-up Endaga has helped build one of the world’s smallest telecoms networks in an eastern Indonesian village, ignore the many people who have a cellphone but have to travel hours to make a call or send a message. “Everyone in our community has a phone and a SIM card,” he says. “But they’re not covered.”

Heimerl reckons up to 2 billion people live most of their lives without easy access to cellular coverage. “It’s not getting better at the dramatic rate you think.”

The challenge is to find a way to connect those people, at an attractive cost.

And then there’s the frontier beyond that: the oceans.

Improving the range and speed of communications beneath the seas that cover more than two-thirds of the planet is a must for environmental monitoring – climate recording, pollution control, predicting natural disasters like tsunami, monitoring oil and gas fields, and protecting harbours.

There is also interest from oceanographers looking to map the sea bed, marine biologists, deep-sea archaeologists and those hunting for natural resources, or even searching for lost vessels or aircraft. Canadian miner Nautilus Minerals Inc said last week it came to an agreement with Papua New Guinea, allowing it to start work on the world’s first undersea metal mining project, digging for copper, gold and silver 1,500 metres (4,921 feet) beneath the Bismarck Sea.

And there’s politics: China recently joined other major powers in deep-sea exploration, partly driven by a need to exploit oil, gas and mineral reserves. This year, Beijing plans to sink a 6-person ‘workstation’ to the sea bed, a potential precursor to a deep-sea ‘space station’ which, researchers say, could be inhabited.

“Our ability to communicate in water is limited,” says Jay Nagarajan, whose Singapore start-up Subnero builds underwater modems. “It’s a blue ocean space – if you’ll forgive the expression.”

BALLOONS, DRONES, SATELLITES

Back on land, the challenge is being taken up by a range of players – from high-minded academics wanting to help lift rural populations out of poverty to internet giants keen to add them to their social networks.

Google, for example, is buying Titan Aerospace, a maker of drones that can stay airborne for years, while Facebook has bought UK-based drone maker Ascenta.

CEO Mark Zuckerberg has said Facebook is working on drones and satellites to help bring the Internet to the nearly two thirds of the world that doesn’t yet have it. As part of its Project Loon, Google last year launched a balloon 20 km (12.4 miles) into the skies above New Zealand, providing wireless speeds of up to 3G quality to an area twice the size of New York City.

But these are experimental technologies, unlikely to be commercially viable for a decade, says Christian Patouraux, CEO of another Singapore start-up, Kacific. Its solution is a satellite network that aims to bring affordable internet to 40 million people in the so-called ‘Blue Continent’ – from eastern Indonesia to the Pacific islands.

A mix of technologies will prevail, says Patouraux – from fiber optic cables, 3G and LTE mobile technologies to satellites like his HTS Ku-band, which he hopes to launch by end-2016. “No single technology will ever solve everything,” he said.

Indeed, satellite technology – the main method of connectivity until submarine cables became faster and cheaper – is enjoying a comeback. While Kacific, O3b and others aim at hard-to-reach markets, satellite internet is having success even in some developed markets. Last year, ViaSat topped a benchmarking study of broadband speeds by the U.S. Federal Communications Commission.

And today’s airline passengers increasingly expect to be able to go online while flying, with around 40 percent of U.S. jetliners now offering some Wi-Fi. The number of commercial planes worldwide with wireless internet or cellphone service, or both, will triple in the next decade, says research firm IHS.

WHITE SPACE

Densely populated Singapore is experimenting with so-called ‘white space’, using those parts of the wireless spectrum previously set aside for television signals. This year, it has quietly started offering what it calls SuperWifi to deliver wireless signals over 5 km or more to beaches and tourist spots.

This is not just a first-world solution. Endaga’s Heimerl is working with co-founder Shaddi Hasan to use parts of the GSM spectrum to build his village-level telco in the hills of Papua.

That means an ordinary GSM cellphone can connect without any tweaks or hardware. Users can phone anyone on the same network and send SMS messages to the outside world through a deal with a Swedish operator.

Such communities, says Heimerl, will have to come up with such solutions because major telecoms firms just aren’t interested. “The problem is that these communities are small,” says Heimerl, “and even with the price of hardware falling the carriers would rather install 4G in cities than equipment in these communities.”

The notion of breaking free of telecoms companies isn’t just a pipe dream.

MESH

Part of the answer lies in mesh networks, where devices themselves serve as nodes connecting users – not unlike a trucker’s CB radio, says Paul Gardner-Stephen, Rural, Remote & Humanitarian Telecommunications Fellow at Flinders University in South Australia.

Gardner-Stephen has developed a mesh technology called Serval that has been used by activists lobbying against the demolition of slums in Nigeria, and is being tested by the New Zealand Red Cross.

Mesh networks aren’t necessarily small, rural and poor: Athens, Berlin and Vienna have them, too. And Google Chairman Eric Schmidt has called them “the most essential form of digital communication and the cheapest to deploy.”

Even without a balloon and Google’s heft, mesh networks offer a bright future, says Gardner-Stephen. If handset makers were to open up their chips to tweaks so their radios could communicate over long distances, it would be possible to relay messages more than a kilometre.

In any case, he says, the Internet no longer has to mean instantaneous communication. As long as we know our data will arrive at some point, possibilities open up: we can think of our devices as data couriers, storing messages on behalf of one community until a villager carries them to another node they can connect to, passing those messages on several times a day.

It’s not our present vision of a network where messages are transmitted in an instant, but more like a digital postal service, which might well be enough for some.
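To make the courier model concrete, here's a toy sketch of the idea — nothing to do with Serval's actual protocol, just store-carry-forward in miniature, with placeholder node names:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A device that stores messages until it meets another node."""
    name: str
    inbox: dict = field(default_factory=dict)    # msg_id -> (dest, body)
    delivered: list = field(default_factory=list)

def sync(a: Node, b: Node) -> None:
    """When two nodes meet: deliver messages addressed to the other,
    and replicate the rest so they can be carried onward."""
    for src, dst in ((a, b), (b, a)):
        for msg_id, (dest, body) in list(src.inbox.items()):
            if dest == dst.name:
                dst.delivered.append(body)
                del src.inbox[msg_id]
            elif msg_id not in dst.inbox:
                dst.inbox[msg_id] = (dest, body)   # carry a copy onward

# A message posted in one village reaches another via a walking courier
village_a = Node("village-a")
courier = Node("courier")
village_b = Node("village-b")

village_a.inbox["m1"] = ("village-b", "clinic open tuesday")
sync(village_a, courier)    # courier picks the message up...
sync(courier, village_b)    # ...and hands it over hours later
print(village_b.delivered)  # ['clinic open tuesday']
```

No tower, no backhaul, no telco — just opportunistic contact. Slow, but for a lot of traffic, slow is enough.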

“Is the Internet going to be what it looks like today? The answer is no,” said Gardner-Stephen.

PISTOL SHRIMPS

As the Internet changes, so will its boundaries.

As more devices communicate with other devices – Cisco Systems Inc estimates there will be 2 billion such connections by 2018 – so is interest increasing in connecting those harder-to-reach devices, including those underwater, that are beyond the reach of satellites, balloons and base stations.

Using the same overground wireless methods for underwater communications isn’t possible, because radio waves travel badly in water. Although technologies have improved greatly in recent years, underwater modems still rely on acoustic technologies that limit speeds to a fraction of what we’re now used to.

That’s partly because there are no agreed standards, says Subnero’s Nagarajan, who likens it to the early days of the Internet. Subnero offers underwater modems that look like small torpedoes which, he says, can incorporate competing standards and allow users to configure them.

This is a significant plus, says Mandar Chitre, an academic from the National University of Singapore, who said that off-the-shelf modems don’t work in the region’s shallow waters.

The problem: a crackling noise that sailors have variously attributed to rolling pebbles, surf, volcanoes, and, according to a U.S. submarine commander off Indonesia in 1942, the Japanese navy dropping some “newfangled gadget” into the water.

The actual culprit has since been identified – the so-called pistol shrimp, whose oversized claw snaps a bubble of hot air at its prey. Only recently has Chitre been able to filter out the shrimp’s noise from the sonic pulses an underwater modem sends. His technology is now licensed to Subnero.
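The general recipe — not Chitre's licensed algorithm, which I haven't seen — exploits the fact that snaps are loud but rare: detect samples that spike far beyond the signal's typical scale and suppress them before decoding. A crude sketch on synthetic data:

```python
import numpy as np

def suppress_impulses(x: np.ndarray, k: float = 4.0) -> np.ndarray:
    """Clip samples exceeding k times a robust estimate of the signal's
    scale -- a crude stand-in for impulsive-noise mitigation."""
    mad = np.median(np.abs(x - np.median(x)))   # median absolute deviation
    limit = k * 1.4826 * mad                    # MAD scaled to ~std dev
    return np.clip(x, -limit, limit)

# Synthetic received signal: a modem carrier plus sparse shrimp-like snaps
fs = 48_000
t = np.arange(0, 0.1, 1 / fs)
rx = np.sin(2 * np.pi * 12_000 * t)              # acoustic modem carrier
rx += 0.05 * np.random.randn(len(t))             # ambient noise
snaps = np.random.rand(len(t)) < 0.001           # rare, very loud impulses
rx[snaps] += 20 * np.random.randn(snaps.sum())   # the shrimp chorus

cleaned = suppress_impulses(rx)
print("peak before:", np.max(np.abs(rx)).round(1),
      "after:", np.max(np.abs(cleaned)).round(1))
```

Real receivers do something far more sophisticated — the snaps' heavy-tailed statistics break the Gaussian assumptions most detectors are built on — but the principle is the same: model the shrimp, then subtract them.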

There are still problems speeding up transmission and filtering out noise, he says. But the world is opening up to the idea that to understand the ocean means deploying permanent sensors and modems to communicate their data to shore.

And laying submarine cables would cost too much.

“The only way to do this is if you have communications technology. You can’t be wiring the whole ocean,” he told Reuters. “It’s got to be wireless.”

(Editing by Ian Geoghegan)

Behind the iPad’s sluggish sales

Sameer Singh offers some possible reasons for the fall in iPad sales: 

Pocketable vs. Tablet Computing | Tech-Thoughts by Sameer Singh: “With this background, the sudden decline in iPad sales may have been caused by a combination of the following factors:

  • Most high-end consumers who need iPads already own them (and as some analysts have pointed out, replacement cycles are long) 
  • Large screen smartphones have made media tablets somewhat redundant, i.e. the iPad is no longer a ‘necessary’ purchase for ‘phablet’ owners 
  • The iPad is priced out of the market segment that still finds media tablets ‘necessary’ 
  • Upmarket movement is limited because tablet use cases still haven’t evolved to cannibalize more productivity-related computing tasks (I may have overestimated the pace at which this would occur)”

To which I’d add: 

The iPad is in some ways closer to a PC than a phone in its utility-vs-luxury ratio. People upgrade their phones because they’re visible accessories, something that says a lot about the person holding them. Computers have barely hit that bar, and maybe iPads — especially since users usually cloak them in a stand/cover — don’t quite make it either. So unless there’s a really compelling performance/spec reason to upgrade, most don’t bother.

I’ve not seen data on this, but anecdotally most people I know get an iPad and then settle, rather than upgrading when the next one comes out. Of course the lack of telco subsidy for most iPad purchases adds to this. 

It’s not that the iPad isn’t a great idea; it’s just that the smarter move, at least in terms of getting people to upgrade, has turned out to be increasing the size of the phone (the phablet) rather than shrinking the size of the computer (the iPad).

Smartwatches: Coming Soon to a Cosmos Near You

This is a column I did for the BBC World Service, broadcast this week. 

There’s been a lot of talk that the big boys — by which I mean Apple and Samsung — are about to launch so-called smart watches. But how smart does a watch have to be before we start strapping them to our wrists in numbers big enough to make a difference?

First off, a confession. I’ve strapped a few things to my wrist in my time. Back in the 80s and 90s I used to love the Casio calculator watch called the Databank, though I can’t actually recall ever doing a calculation on it or putting more than a few phone numbers in there. About a decade ago I reviewed something called the Fossil Wrist PDA, a wrist-bound personal digital assistant. It didn’t take off. In fact, no smart watch has taken off.

So if the smartwatch isn’t new, maybe the world around them is? We’ve moved a long way in the past couple of years, to the point where every device we have occupies a slightly different spot to the one it was intended for. Our phones, for example, are not phones anymore but data devices. And even that has evolved: the devices have changed direction in size, from shrinking to getting larger, as we realise we want to do more on them.

That in turn has made tablets shrink. When Apple introduced the iPad, Steve Jobs famously said that was the smallest a tablet could reasonably go, but Samsung proved him wrong with the phablet, and now we have an iPad Mini. All this has raised serious questions about the future of the laptop computer and the desktop PC.

But it shouldn’t. For a long time we thought that the perfect device would be something that does everything, but the drive to miniaturise components has actually had the opposite effect: we seem to be quite comfortable moving between devices and carrying a bunch of them around with us.

This all makes sense, given that our data is all stored in the cloud, and every device is connected to it either through WiFi, a phone connection or Bluetooth. We often don’t even know how our device is connecting — we just know it is.

So, the smartwatch optimists say, the time is ripe for a smartwatch. Firstly, we’ve demonstrated that we are able to throw out tired conventions about what a device should do. If our phone isn’t really our phone anymore then why not put our phone on our wrist? Secondly, the cloud solves the annoying problem of getting data in and out of the device.

Then there’s the issue of how we interact with it. It’s clear from the chequered history of the smartwatch that using our digits is not really going to work. We might be able to swipe or touch to silence an alarm or take a call, but we’re not going to be tapping out messages on a screen that size.

So it’s going to have to be voice. GeneratorResearch, a research company, reckons this would involve a small earpiece and decent voice-command software like Apple’s Siri. I’m not convinced we’re quite there yet, but I agree with them that it’s going to take someone of Apple’s heft to make it happen and seed the market.

In short, the smart watch might take off if it fits neatly and imaginatively into a sort of cosmos of devices we’re building around ourselves, where each one performs a few specific functions and overlaps with others on some. If it works out, the watch could act as a sort of central repository of all the things we need to know about — incoming messages, appointments, as well as things the cloud thinks we should know about, based on where we are: rain, traffic jams, delayed flights.

But more crucially it could become something that really exploits the frustratingly unrealised potential of voice: where we could more easily, and less self-consciously, talk to our devices and others without having to hold things to our ear, or be misunderstood.

In time, the smartwatch may replace the smartphone entirely.

I’m not completely convinced we’re as close as some think we are, but I’ve said that before and been proved wrong, so who knows?