Evernote Makes Employee Reading of Messages Opt-in

Evernote has been through the wringer with its decision to add machine learning to its repertoire, effectively trying to pave the way for added services based on scanning the contents of users’ notes. Users were not happy, not least because Evernote made it opt-out. The settings looked like this:

[Screenshot: Evernote’s machine-learning privacy settings]

Evernote has now had a change of heart, rather coyly titling the announcement “Evernote Revisits Privacy Policy Change in Response to Feedback”: it will no longer implement the Privacy Policy changes planned for January 23.

“Instead, in the coming months we will be revising our existing Privacy Policy to address our customers’ concerns, reinforce that their data remains private by default, and confirm the trust they have placed in Evernote is well founded. In addition, we will make machine learning technologies available to our users, but no employees will be reading note content as part of this process unless users opt in. We will invite Evernote customers to help us build a better product by joining the program.”

It’s probably the best solution in the circumstances, but it was poorly handled, and reflected a lack of understanding, once again, of what the product is. Evernote is, simply, a place where you can store your notes forever. That needs to be paramount. Anything else needs to support that, and not undermine it.

Users reacted as they did because they prize privacy and security above whatever layers of features and services might arise from running semantic engines and whatnot over their notes. Doing it via opt-out, under a privacy policy that raised suspicions, only made things worse.

I personally would love to see more done with my notes — complex search is still poor, finding similar notes is still poor — but I need, and I’m sure I’m not alone, to be confident Evernote isn’t going to do anything weird with my stash without my permission. Especially having employees poring over it.

Turn off location in iOS, and Uber doesn’t work

(Update: Uber say they are looking into it.) 

BuzzFeed says Privacy Advocates Want Uber To Stop Tracking Users After Rides End, but Uber responds that “by offering the option of manually entering pick-up locations, the company is giving users a choice to be tracked or not.”

It quotes Kurt Opsahl, deputy executive director and general counsel at the EFF, as saying that this “takes away a lot of the usability.” Part of Uber’s appeal is how easy it is to open the app and let GPS pinpoint your location for a driver. “As you’re trying to get picked up by the side of the road, you might not know what address you’re at,” Opsahl said. “I guess you could turn it on and off again…but that’s pretty clunky as well.”

I’d agree, and have found in my tests that it’s worse than that: turn location off, and the app no longer works. 

First off, here are the options, as described in settings in iOS:

[Screenshot: Uber’s location options in iOS Settings]

So it’s either Always or Never. Nothing in between. Set it to Never and things not only get clunky, meaning you’re prompted by dire warnings every few minutes; after a day or two you also start to get blank screens, like these, when you try to book an Uber.

[Screenshots: blank Uber screens when trying to book a ride]
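
As an aside, the all-or-nothing choice in Settings follows from how the app asks for permission. As I understand it, an iOS app that only ever requests “Always” location authorization gives iOS nothing else to offer in Settings. Here’s a minimal Swift sketch of that pattern (purely illustrative, and not Uber’s actual code):

```swift
import CoreLocation

// A minimal sketch of an app that requests only "Always" authorization,
// which is why Settings shows just Never and Always. Not Uber's code.
final class RideLocationManager: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    override init() {
        super.init()
        manager.delegate = self
        // Requires NSLocationAlwaysUsageDescription in Info.plist.
        // Because the app never requests when-in-use authorization,
        // the user's choice in Settings is all or nothing.
        manager.requestAlwaysAuthorization()
    }

    func locationManager(_ manager: CLLocationManager,
                         didChangeAuthorization status: CLAuthorizationStatus) {
        switch status {
        case .authorizedAlways:
            manager.startUpdatingLocation()
        case .denied, .restricted:
            // The user chose "Never": no location fixes at all, so the
            // app must fall back to manual address entry, or stop working.
            manager.stopUpdatingLocation()
        default:
            break
        }
    }
}
```

If the fallback path is missing or broken, you’d expect exactly the kind of blank screens shown above.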

I’ve reached out to Uber for an explanation. 

I’m An Airline, Fly Me

This is an email from a bona fide airline:

Dear Sir/Madam,

Please be informed that your transaction with [international carrier] has been confirmed. Due to fraud prevention procedure against Credit Card transaction, we would like to validate your recent transaction with [international carrier] by filling information below :

Passenger(s) name :
Route :
Date of Travel :
Cardholder name :
Address :

Also, we need to confirm and validate your name and last four digit of your card number. Please kindly provide scanned/image of your front side credit card that used to buy the ticket. You may cover the rest information on the card. Please reply in 8 hours after received this email or we will cancel the reservation.

Thank you for your cooperation.

Best Regards,
Verification Data Management

BBC: The Rise of Disappearables

The transcript of my BBC World Service piece on wearables. The original Reuters story is here.

Forget ‘wearables’, and even ‘hearables’, if you’ve ever heard of them. The next big thing in mobile devices: ‘disappearables’.

Unless it really messes up, Apple is going to do for wearables with the Watch what it has done for music players with the iPod, for phones with the iPhone, and for tablets with the iPad. But even as Apple piques consumer interest in wrist-worn devices, the pace of innovation and the tumbling cost, and size, of components will make wearables smaller and smaller. So small, some in the industry say, that no one will see them. In five years, wearables like the Watch could be overtaken by hearables – devices with tiny chips and sensors that can fit inside your ear. They, in turn, could be superseded by disappearables – technology tucked inside your clothing, or even inside your body.

This all may sound rather unlikely, until you consider that the iPhone is only 8 years old, and look at what has happened to the phone since then. Not only is the smartphone a status symbol in the salons of New York; it’s also something billions of people can afford. So it seems highly plausible that the watch as a gizmo is going to seem quaint in 10 years — as quaint as our feature phones, netbooks and MP3 players seem now.


So how is this all going to play out? Well, this year you’ll be able to buy a little earpiece which contains a music player, 4 gigabytes of storage, a microphone to take phone calls – just nod your head to accept – and sensors that monitor your position, heart rate and body temperature.

Soon after that you’ll be able to buy contact lenses that can measure things like glucose levels in tears. Or swallow a chip the size of a grain of sand, powered by stomach juices and transmitting data about your insides via Bluetooth. For now everyone is focused on medical purposes, but there’s no reason that contact lens couldn’t also be beaming stuff back to you in real time — handy if you’re a politician who wants to gauge the response to a speech and tweak it as you deliver it.

Or you’re on a date and need feedback on your posture, your gait, the quality of your jokes.

In short, hearables and wearables will become seeables and disappearables. We won’t see these things because they’ll be buried in fabric, on the skin, under the skin and inside the body. We won’t attack someone for wearing Google Glass because we won’t know they’re wearing it.

Usual caveats apply. This isn’t as easy as it looks, and there’ll be lots of slips along the way. But the underlying technologies are there: components are getting smaller and cheaper, so why not throw a few extra sensors into a device, even if you haven’t activated them and are not quite sure what they could be used for?

Secondly, there’s the ethical stuff. As you know, I’m big on this and we probably haven’t thought all this stuff through. Who owns all this data? Is it being crunched properly by people who know what they’re doing? What are bad guys and governments doing in all this, as they’re bound to be doing something? And how can we stop people collecting data on us if we don’t want them to? 

All good questions. But all questions we should be asking now, of the technologies already deployed in our street, in our office, in the shops we frequent, in the apps we use and the websites we visit. It’s not the technology that’s moving too fast; it’s us moving too slow.

Once the technology is too small to see it may be too late to have that conversation.  


The Facebook Experiment: Some Collated Views

A few pieces on the Facebook Experiment. I’m still mulling my view.

Paul Bernal: The Facebook Experiment: the ‘why’ questions…:

 Perhaps Facebook will look a little bad for a little while – but the potential financial benefit from the new stream of advertising revenue, the ability to squeeze more money from a market that looks increasingly saturated and competitive, outweighs that cost.

Based on the past record, they’re quite likely to be right. People will probably complain about this for a while, and then when the hoo-haa dies down, Facebook will still have over a billion users, and new ways to make money from them. Mark Zuckerberg doesn’t mind looking like the bad guy (again) for a little while. Why should he? The money will continue to flow – and whether it impacts upon the privacy and autonomy of the people on Facebook doesn’t matter to Facebook one way or another. It has ever been thus….

(Via Paul Bernal’s Blog)

A contrarian view from Rohan Samarajiva: Confused objections to Facebook emotional contagion research:

I am puzzled by the predominantly negative reaction to the manipulation of Facebook content, in the recently published research article, in the mainstream media (MSM), though perhaps less in blogs and such.

It seems to me that MSM’s reaction is hypocritical. They manipulate their content all the time to evoke different emotional responses from their readers/viewers/listeners. The difference is that conducting research on resultant emotional changes on MSM is not as easy as on Facebook. For example, magazines have used different cover images, darkening or lightening faces and so on. Their only indicator of success is whether version A sold more than version B. Not very nuanced.

(Via LIRNEasia)

And Ed Felten: Privacy Implications of Social Media Manipulation:

To be clear, I am not concluding that Facebook necessarily learned much of anything about the manipulability of any particular user. Based on what we know I would bet against the experiment having revealed that kind of information about any individual. My point is simpler: experiments that manipulate user experience impact users’ privacy, and that privacy impact needs to be taken into account in evaluating the ethics of such experiments and in determining when users should be informed.

(Via Freedom to Tinker)

And finally from Robin Wilton: Ethical Data Handling and Facebook’s “Emotional Contagion” Study:

Once, in a workshop, while discussing mechanisms for privacy preference expression, I said I would be happier for data subjects to have some means of expressing a preference than none. An older, wiser participant made the following wry remark: “That only brings a benefit if someone is prepared to give weight to their preference. If not… well, ten million times zero is still zero”. And that’s the weight Facebook appears to have given to the legitimate interests of its data subjects.

(Via Internet Society Blog Feed)

True Video Lies

This is a longer version of a piece I recorded for the BBC World Service.

The other day my wife lost her phone out shopping. We narrowed it down to either the supermarket or the taxi. So we took her shopping receipt to the supermarket and asked to see their CCTV to confirm she still had the phone when she left.

To my surprise they admitted us into their control room. Banks of monitors covering nooks, crannies, whole floors, each checkout line. There they let us scroll through the security video—I kind of took over, because the guy didn’t seem to know how to use it—and we quickly found my wife, emptying her trolley at checkout line 17. Behind her was our daughter in her stroller, not being overly patient. It took us an hour, but in the end we established what looked like a pretty clear chain of events. She had the bag containing the phone, which she gave to our daughter at the checkout to distract her. One frame shows the bag falling from her hands onto the floor, unnoticed by my wife.

Then, a few seconds later, the bag is mysteriously whisked off the floor by another shopper. I couldn’t believe someone would swoop so quickly. The CCTV records only a frame a second, so it took us some time to narrow it down to a woman wearing black leggings, a white top and a black belt. Another half hour of checks and we got her face as she bought her groceries at another till. No sign of the phone bag by this time, but I was pretty sure we had our man. Well, woman.

Except I’m not sure we did. What I learned in that control room is that video offers a promise of surveillance that doesn’t lie. It seems to tell us a story, to establish a clear chain of events. But the first thing I noticed when I walked back out into the supermarket was how little of the floor the cameras covered, and how narrow each camera’s perspective was.

For the most part we’ve learned that photos don’t always tell the truth. They can be manipulated; they offer only a snapshot, without context. But what about videos? We now expect to see cameraphone footage in our news bulletins, jerky, grainy recordings taken by unseen hands, raw and often without context.

This is not to say videos are not powerful truth tellers. But we tend to see what we want to see. When a policeman pepper-sprays protesters at the University of California there is outrage, and the spraying does indeed appear to be unwarranted. But when four of the videos are synchronized together a more complex picture emerges. Not only can one see the incident within context, but also one gets a glimpse of a prior exchange, as the officer explains what he is about to do to one protester, who replies, almost eagerly: “You’re shooting us specifically? No that’s fine, that’s fine.”

This is not to condone what happens next, but this exchange is missing from most of the videos. The two videos that contain the full prelude are, of course, longer, and have been watched far fewer times: 12,658 times (15 minutes) and 245,226 times (8 minutes), versus 1,346,781 times (1 minute) for the one that does not (the other video has since been taken down).

I’m not suggesting that the more popular video has been deliberately edited to convey a different impression, but it’s clearly the version of events that most are going to remember.

We tend to believe video more than photos. Videos seem harder to doctor, harder to use to hoodwink us, harder to take out of context. But should we?

It’s true that videos are harder to fake. For now. But even unfaked videos might seem to offer a version of the facts that isn’t the whole story. Allegations that former IMF managing director Dominique Strauss-Kahn may have been framed during a sexual encounter at a New York hotel, for example, have been buttressed by an extensive investigation published recently in the New York Review of Books. There are plenty of questions raised by the article, which assembles cellphone records, door key records, as well as hotel CCTV footage.

The last seems particularly damning. A senior member of the hotel staff is seen high-fiving an unidentified man and then performing what seems to be an extensive dance of celebration shortly after the event. This may well be the case, but I’d caution against relying on the CCTV footage. For one thing, if this person was in any way involved, would they not be smart enough to confine their emotions until they’re out of sight of the cameras they may well have installed themselves?

Back to my case: Later that night we got a call that our phone had been recovered. The police, to whom I had handed over all my CCTV evidence, said I was lucky. A woman had handed it in to the mall’s security people. I sent her a text message to thank her. I didn’t have the heart to ask her whether she had been wearing black trousers and white top.

But I did realise that the narrative I’d constructed and persuaded myself was the right one was just that: a story I’d chosen to see.

Carrier IQ’s Opt-Out Data Collection Patent

ZDNet writes here about a Carrier IQ patent that outlines keylogging and the ability to target individual devices. Which is interesting. But Carrier IQ owns a dozen patents, including this one, which to me is much more interesting. This patent indicates what Carrier IQ software could do—not what it does—but it is revealing nonetheless:

A communication device and a data server record and collect events and event-related data to create an activity record. A user of the communication device may request that events and related data be recorded and collected using a configuration option on the communication device or through an interaction with the data server. Data are grouped into data sets and uploaded to the data server either automatically or upon user approval. The data server uses the uploaded data to create an activity record which the user may access through a website. The user uploads additional data which are associated with the activity record. In some instances, the data server embeds a link pointing to the additional data in an entry in the activity record corresponding to an event associated with the additional data.

Basically this patent offers a way for a “user”—which could be either the user of the device or the service—to have a record of everything they do:

[Figure: diagram from the patent]

While most of the patent is clearly about a product that would create a ‘lifestream’ for the user—a place where they can access all the things they’ve done with the device, including photos, in one tidy presentation—there’s clearly more to it than that. Buried in the patent are indications that it could do all this without the user asking it to. It’s paragraph [0023] which I think is most interesting:

A user of a mobile device requests that events and event-related data be collected by a data server and data collection begins. Alternately, data collection may be a default setting which is turned off only when the device user requests that data collection not occur. In yet another embodiment, a request from a server can initiate, pause, or stop data collection. The mobile device is configured to record events performed by the mobile device as well as event-related data. Typical events that the mobile device records include making or receiving a phone call; sending or receiving a message, including text, audio, photograph, video, email and multimedia messages; recorded voice data, voice messages, taking a photograph; recording the device’s location; receiving and playing an FM or satellite radio broadcast; connecting to an 802.11 or Bluetooth access point; and using other device applications. The data most often related to an event include at least one of: the time, date and location of an event. However, other event-related data include a filename, a mobile device number (MDN) and a contact name. Commonly, the mobile device records events and provides a time, date and location stamp for each event. The events and event-related data can be recorded in sequence and can be stored on the mobile device.

This seems to suggest that

  • basically all activity on the phone can be logged
  • the software can be turned on by default
  • the software can be turned on and off from the server

All this information would be grouped together and uploaded either with the user’s permission or without it:

[0025] The mobile devices may be configured to store one or more data sets and upload the data sets to the data server. In one embodiment, the data sets are uploaded automatically without user intervention, while in other embodiments the mobile device presents a query to the user beforehand. When the mobile device is ready to upload one or more sessions to the data server, a pop-up screen or dialog may appear and present the user with various options. Three such options include (1) delete session, (2) defer and ask again and (3) upload now. The user interface may present the query every time a session is ready to upload, or the user may be permitted to select multiple sessions for deletion, a later reminder or upload all at once. In another embodiments, the uploading of sessions may occur automatically without user intervention. Uploads may also be configured to occur when the user is less likely to be using the device.
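
To make this concrete, here’s a rough Swift model of the behavior described in [0023] and [0025]. To be clear, this is my sketch, not Carrier IQ’s code: the patent describes embodiments, not an implementation, and every name below is hypothetical.

```swift
import Foundation

// [0023]: each recorded event gets "a time, date and location stamp",
// plus related data such as a filename, MDN or contact name.
struct EventRecord {
    let event: String                  // "phone call", "text message", "photo taken"...
    let timestamp: Date
    let location: (lat: Double, lon: Double)?
    let relatedData: [String: String]  // e.g. filename, MDN, contact name
}

// [0025]: when a session is ready to upload, the user may see a pop-up
// with three options -- or, in other embodiments, no query at all.
enum UploadDecision {
    case deleteSession      // (1) discard the recorded data
    case deferAndAskAgain   // (2) keep it and prompt later
    case uploadNow          // (3) send it to the data server
}

struct SessionUploader {
    /// In some embodiments, sessions upload "automatically without user intervention".
    var automaticUpload: Bool

    func sessionReady(_ session: [EventRecord],
                      promptUser: () -> UploadDecision,
                      upload: ([EventRecord]) -> Void) {
        if automaticUpload {
            upload(session)             // the opt-out embodiment: no query
            return
        }
        switch promptUser() {           // the pop-up dialog [0025] describes
        case .deleteSession:    return
        case .deferAndAskAgain: scheduleReminder()
        case .uploadNow:        upload(session)
        }
    }

    private func scheduleReminder() { /* re-present the query later */ }
}
```

The interesting part is the `automaticUpload` flag: set it, and the pop-up never appears.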

This point—about the option to collect such data without the user’s say-so—is confirmed in [0030]:

Although typically the device and the server do not record, upload and collect data unless the user requests it, in other embodiments the communication device and the server automatically record, upload and collect data until the user affirmatively requests otherwise.

And in [0046]:

In embodiments where participation in the data collection services is the default configuration for a mobile device (e.g., an “opt-out” model), it is not necessary to receive a request from a user prior to recording data.

An ‘opt-out’ model is hard to square with a product pitched as a user-centric lifestream.

While patents only tell part of the story, there’s no evidence of any such consumer-facing product on Carrier IQ’s website, so one has to assume these capabilities have been, or could be, wrapped into their carrier-centric services. In that sense, I think there’s plenty of interest here.

Deconstructing Carrier IQ’s Press Release

I couldn’t find this press release on their website, and it’s a couple of weeks old, but I thought it worth deconstructing anyway. The text of the release alternates with my comments below. I don’t pretend to have got everything right here, but these might be the starting points for deeper questions.

Carrier IQ Says Measuring Mobile User Experience Does Matter! – MarketWatch:

MOUNTAIN VIEW, Calif., Nov 16, 2011 (BUSINESS WIRE) — Carrier IQ would like to clarify some recent press on how our product is used and the information that is gathered from smartphones and mobile devices.

Carrier IQ delivers Mobile Intelligence on the performance of mobile devices and networks to assist operators and device manufacturers in delivering high quality products and services to their customers. We do this by counting and measuring operational information in mobile devices — feature phones, smartphones and tablets.

“Operational information” is a very vague term. And it’s clear from this comment that it’s not just smartphones that have the software installed: feature phones and tablets have it too.

This information is used by our customers as a mission critical tool to improve the quality of the network, understand device issues and ultimately improve the user experience. Our software is embedded by device manufacturers along with other diagnostic tools and software prior to shipment.

It calls it a diagnostic tool, but most people’s understanding of a diagnostic tool is one that runs in diagnostic mode. This doesn’t: it runs all the time–even on Wi-Fi and in airplane mode. But this comment also hints that there are other tools and software installed by manufacturers too.

While we look at many aspects of a device’s performance, we are counting and summarizing performance, not recording keystrokes or providing tracking tools.

‘Recording’ keystrokes could be as it looks, or it could be weasel language, given the fact that keystrokes are definitely logged. Logging could be considered different to recording in this context.

The metrics and tools we derive are not designed to deliver such information, nor do we have any intention of developing such tools.

But they clearly do, so is that a bug? Is the word deliver here key, as in not designed to deliver such information to certain parties?

The information gathered by Carrier IQ is done so for the exclusive use of that customer, and Carrier IQ does not sell personal subscriber information to 3rd parties.

This doesn’t really help. The issue was never really that Carrier IQ was selling the data–it was assumed the carrier would be, if anyone was–and the term “personal subscriber information” is quite possibly a weasel term, as “personal” has tended to mean data that includes the actual subscriber’s name. And we know now that even anonymized data can be mined so that it is quickly connected to a specific person.

The information derived from devices is encrypted and secured within our customer’s network or in our audited and customer-approved facilities.

I don’t know enough about this, but I’m guessing these are weasel words too. The key word is “within.” It seems pretty clear that most if not all of the Carrier IQ data is in plain text, so presumably the encrypting and securing happen only once the data reaches the customer’s network (i.e., not on the external network, only within the customer’s own computer network). It also makes clear that the data, whether encrypted or not, also resides within Carrier IQ’s systems.

Our customers have stringent policies and obligations on data collection and retention. Each customer is different and our technology is customized to their exacting needs and legal requirements.

Except that at no point, as far as we know, was any subscriber actually asked whether they approved of this data being collected about them. In fact, we don’t even know who Carrier IQ’s customers are, so there is no way to verify this.

Carrier IQ enables a measurable impact on improving the quality and experience of our customer’s mobile networks and devices. Our business model and technology aligns exclusively with this goal.

Don’t get me started on the word ‘experience.’ It covers a multitude of sins and can mean more or less anything. My experience of call dropouts? Yes, sure, fix that. My experience of what services I use, how many times I enter my password, whether I’m buying something in Starbucks or Coffee Bean, how many people are in my address book etc. No. Not what I want you to log.

I think there’s another element at play here. Clearly the device manufacturers have allowed this to happen, since the software is installed at the point of manufacture. A carrier can use the service because, whatever device their customer uses, they can be pretty confident that the Carrier IQ software is embedded. So one has to ask: what data are being shared between carrier, Carrier IQ and manufacturer? And how does this work?

SOURCE: Carrier IQ

Stuck on Stuxnet

By Jeremy Wagstaff (this is my weekly Loose Wire Service column for newspaper syndication)

We’ve reached one of those moments that I like: when we’ll look back at the time before and wonder how we were so naive about everything. In this case, we’ll think about when we thought computer viruses were just things that messed up, well, computers.

Henceforward, with every mechanical screw-up, every piston that fails, every pump that gives out, any sign of smoke, we’ll be asking ourselves: was that a virus?

I’m talking, of course, about the Stuxnet worm. It’s a piece of computer code–about the size of half an average MP3 file–which many believe is designed to take out Iran’s nuclear program. Some think it may already have done so.

What’s got everyone in a tizzy is that this sort of thing was considered a bit too James Bond to actually be possible. Sure, there are stories. Like the one about how the U.S. infected some software controlling a Siberian pipeline so that it exploded in 1982 and brought down the whole Soviet Union. No-one’s actually sure that this happened–after all, who’s going to hear a pipeline blow up in the middle of Siberia in the early 1980s?–but that hasn’t stopped it becoming one of those stories you know are too good not to be true.

And then there’s the story about how Saddam Hussein’s phone network was disabled by US commandos in January 1991 armed with a software virus, some night vision goggles and a French dot matrix printer. It’s not necessarily that these things didn’t happen–it’s just that we heard about them so long after the fact that we’re perhaps a little suspicious about why we’re being told them now.

But Stuxnet is happening now. And it seems, if all the security boffins are to be believed, to open up a scary vista of a future when one piece of software can become a laser-guided missile pointed right at the heart of a very, very specific target. Which needn’t be a computer at all, but a piece of heavy machinery. Like, say, a uranium enrichment plant.

Stuxnet is at its heart just like any other computer virus. It runs on Windows. It can infect a computer via one of those USB flash drive thingies, or through a network if it finds a weak password.

But it does a lot more than that. It’s on the lookout for machinery to infect—specifically, a Siemens Simatic Step 7 factory system. This system runs on a version of Microsoft Windows, and is where the code that runs the programmable logic controllers (PLCs) is put together. Once compiled, these programs are uploaded to the PLCs, which control the machinery. Stuxnet, from what people can figure out, fiddles around with this code within the Siemens computer, tweaking it as it goes to and comes back from the PLC itself.

This is the thing: no one has seen anything like it before. Of course, we’ve heard stories. Only last month it was reported that the 2008 crash of a Spanish passenger jet, killing 154 people, may have been caused by a virus.

But this Stuxnet thing seems to be on a whole new level. It seems to be very deliberately targeted at one factory, and would make complex modifications to the system. It uses at least four different weaknesses in Windows to burrow its way inside, and installs its own software drivers—something that shouldn’t happen because drivers are supposed to be certified.

And it’s happening in real time. Computers are infected in Indonesia, India, Iran and now China. Boffins are studying it and may well be studying it for years to come. And it may have already done what it’s supposed to have done; we may never know. One of the key vulnerabilities the Trojan used was first publicized in April 2009 in an obscure Polish hackers’ magazine. The number of operating centrifuges in Iran’s main nuclear enrichment program at Natanz fell significantly a few months later; the head of Iran’s Atomic Energy Organization resigned in late June 2009.

All this is guesswork, and all very smoke and mirrors: Israel, perhaps inevitably, has been blamed by some. After all, it has its own cyber warfare division called Unit 8200, and is known to have been interested, like the U.S., in stopping Iran from developing any nuclear capability. And researchers have found supposed connections inside the code: the word “myrtle,” for example, which may or may not refer to the Book of Esther, which tells of a Persian plot against the Jews; and the string 19790509, which may or may not be a nod to Habib Elghanian, a Jewish-Iranian businessman who was accused of spying for Israel and was executed in Iran on May 9, 1979.

Frankly, who knows?

The point with all this is that we’re entering uncharted territory. It may all be a storm in a teacup, but it probably isn’t. Behind all this is a team of hackers who not only really know what they’re doing, but know what they want to do. And that is to move computer viruses out of our computers and into machinery. As Sam Curry of security company RSA puts it:

This is, in effect, an IT exploit targeted at a vital system that is not an IT system.

That, if nothing else, is reason enough to look nostalgically back on the days when we didn’t wonder whether the machinery we entrusted ourselves to was infected.

Another Facebook Hole?

(Update: Facebook have confirmed the flaw—although it’s not as serious as it looks—and have fixed it. See comments.)

The complexity of Facebook makes it likely there are holes in its privacy. But this one, if I’m right, seems to suggest that it’s possible to access someone’s private data by a social engineering trick outside Facebook.

Today I received an email invite to join Facebook from someone I’ve never heard of. Weird, firstly, because this was not someone I think I’d have known. Weird, also, because I’m already on Facebook.

[Screenshot: the email invitation]

Just to make sure, I clicked on the link to sign up for Facebook and took the option there to sign in with my existing account.

That took me to my usual Facebook page. No more mention of the dude wanting to be my friend. At no point was I given any option to let this person into my life or not.

So I Googled the guy’s name and, lo and behold, I find I’m already on his list of friends:

[Screenshot: his Facebook friends list, with my name on it]

Slightly freaked out, I went back to my account to see if this person was included in my list of friends. He wasn’t.

In other words, this guy can now see all my account details, and I can’t see his. Moreover, at no point have I accepted anything. All I’ve done is click on a link that said: To sign up for Facebook, follow the link below.

What I guess has happened is what happens if you click on the profile of someone who is not a friend but has sent you a message, or asked you to be a friend. In either case, I believe, that person then gets a week’s access to your profile.

I think this is dumb. And I think it’s dangerous that anyone can email me and, if I then click on a link to check out who they are, I cede access to my information without being able to block it, and without being able to see his Facebook profile to find out what kind of person can now access my data.