Digital identity has three layers, and you can only protect one of them (qz.com)
103 points by longdefeat on Feb 2, 2019 | 31 comments


The main problem I see is with how they plan to fix this:

> Instead of guessing our location, relationships, or hidden desires behind our backs, they could ask questions and respect our answers.

You *really* cannot trust what someone tells you about themselves. Machines can rely on the information people feed into social media because those are things the person is personally OK with sharing with anyone and everyone (or, at least, with their friend group), but once they start asking specifically about relationships, (bad) habits, and embarrassing situations ("were you at <a bar> yesterday at 2 AM?"), people often lie to the machine to make themselves seem like a better person or to boost their own morale.


Yes, but that's okay. Why should humans not be able to lie to machines? We value privacy because there are some things that are not other people's (or other machines') business.

In the end, it matters why you share the data. If you share data with a service because you have an addiction problem and the service will help you overcome it, then it is reasonable to be more open and personal. But if that same data is also given out to others to judge your creditworthiness, then we have a breach of trust, and that is rightly unacceptable.

If you share your data on Facebook (or any more trustworthy platform), then your objectives are of a different kind: you want to share aspects of your social life. This is a lot less private than the addiction service, and also a lot less critical when it comes to the correctness of the information.


It’s not really lying to machines, though; the machines don’t care. It’s lying to the people on the other end of the machine.

Suppose somebody working on Hacker News wants to redesign it, and wants to know if they should optimize for long visits or short visits. If they call me on the phone and ask me how often I open HN and what my mean usage duration is, I’m just going to make up numbers. Their website analytics, on the other hand, can tell them something closer to reality.

I’m not saying that’s good or bad; I’m just saying that our devices are often more reliable records of ourselves than our own memories and thoughts.
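
For illustration, here is a minimal sketch (mine, not from the thread) of the kind of thing those analytics can compute: per-user visit counts and mean session length from timestamped page views. The event format and the 30-minute inactivity cutoff are hypothetical.

    # Minimal sketch, hypothetical event format: estimate per-user visit count
    # and mean session length from timestamped page views, splitting sessions
    # whenever the gap between hits exceeds 30 minutes.
    from collections import defaultdict
    from datetime import datetime, timedelta

    SESSION_GAP = timedelta(minutes=30)

    def sessionize(events):
        """events: iterable of (user_id, iso_timestamp) page views."""
        by_user = defaultdict(list)
        for user_id, ts in events:
            by_user[user_id].append(datetime.fromisoformat(ts))

        stats = {}
        for user_id, hits in by_user.items():
            hits.sort()
            durations = []
            start = prev = hits[0]
            for t in hits[1:]:
                if t - prev > SESSION_GAP:
                    durations.append(prev - start)  # close the current session
                    start = t
                prev = t
            durations.append(prev - start)
            stats[user_id] = {
                "visits": len(durations),
                "mean_duration": sum(durations, timedelta()) / len(durations),
            }
        return stats

    print(sessionize([
        ("u1", "2019-02-02T09:00:00"), ("u1", "2019-02-02T09:04:00"),
        ("u1", "2019-02-02T13:00:00"),
    ]))

No phone survey required; the log already contains the answer, at least to whatever precision the tracking allows.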


This gets into very deep water about what the "real" self is. And using this in the machine feedback loop produces weird side effects, like the tendency of YouTube to recommend far-right, conspiracy, anti-vax, etc. videos because those have high engagement. It's almost a dissociative effect, where people keep watching or clicking but understand less and less why they're doing it.

People who wire a machine directly into the id have obviously never seen or understood Forbidden Planet.


It gets into the value of the humanities. Not the departments at universities but the course of study, which must be concerned with the functional sum of sensory systems.

Since Plato, the humanities have served critical purposes beyond the production of immersive experience. But the two are no more separable than science is from logic.

The value of the humanities, in my view, is in beginning with the sum of the effects and critiquing the mechanics from there. I am afraid this necessitates something like the opposite of a technocracy, with no lack of sincerity and brevity. Hence its historical congruence with enlightenment, whatever that is ...


> Not the departments at universities but the course of study

Why are you making this distinction? I think most people undertaking that course of study will be doing it at a department in a university.


>> Not the departments at universities but the course of study

> Why are you making this distinction?

Probably because there's a pretty deeply held belief among a lot of technologists that non-STEM degrees and departments are at best wastes of time, and at worst utterly corrupt.


I've observed the very YouTube behavior you describe on a newly set up computer in a new location, where I hadn't logged in with my Google account. It's really horrible.


> I’m not saying that’s good or bad; I’m just saying that our devices are often more reliable records of ourselves than our own memories and thoughts.

True. But I am still running a script blocker and other privacy mechanisms to make the analytics fail.


> ...but once they start asking specifically about relationships [etc] people often lie to the machine to make them seem like a better person or boost their own morale.

People do this on social media anyway. Because it's pseudo-public, social media is mostly performative. You post your staged selfies, victories, and happy memories and don't post the rest. The only place you'll find the kind of honesty you describe is private chat, and analyzing chat for "preferences" is like snooping on people's private conversations - ultra creepy.


I see another aspect to this as well. Our incentives are fundamentally opposed to the incentives of those who are collecting data on us. So not only do they have reasons not to believe what we tell them about ourselves if they were to ask; they also have reasons to figure out better ways to use the information against us if we were to answer.

I don't think there's a good solution to this as long as so many services that people want to use are monolithic and centralized.


Just wait until they have gaze tracking... they'll have real-time interest/boredom detection (image order, time spent, ignored) along with an almost unmaskable biometric probability match (sequential novelty/boredom detection on e.g. cat pictures).

You don't have to click for them to know you like dark long haired angora goats... in compromising positions.

Hmmm, are you jealous of that perfectly cabled rack next to him? Maybe we could interest you in... or we could just let your mom know, if you don't pay for protection... she's not always so understanding these days.


Gaze tracking isn’t necessary for this. You can track how long a user stopped scrolling through a feed on items classified as X, then build confidence by comparing the frequency of stops on X classified stories over time.
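
A minimal sketch of that idea (my own illustration; the event schema and dwell threshold are hypothetical): score topics by the share of feed items the user paused on, ignoring items merely scrolled past.

    # Minimal sketch, hypothetical schema: infer interest in a topic from how
    # often the feed stays stopped on items classified as that topic.
    # No gaze tracking, no clicks, just dwell time.
    from collections import defaultdict

    def interest_scores(scroll_stops, min_dwell_ms=1500):
        """scroll_stops: iterable of (item_topic, dwell_ms), one entry per
        item the feed paused on. Returns topic -> share of meaningful stops."""
        counts = defaultdict(int)
        total = 0
        for topic, dwell_ms in scroll_stops:
            if dwell_ms >= min_dwell_ms:  # ignore items merely scrolled past
                counts[topic] += 1
                total += 1
        return {topic: n / total for topic, n in counts.items()} if total else {}

    print(interest_scores([("cats", 4000), ("politics", 800), ("cats", 2500)]))

Compared session over session, a rising share for one topic is exactly the "confidence over time" signal described above.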

Also, for really weird stuff, it’s probably enough to blackmail a person just by showing that Weird Thing appears in their social feed in the first place.


I pay a lot of attention to what goes in my profile. My wife and I are older, so several of our friends have serious health issues. We are careful to use Tor or private browsing with DuckDuckGo when doing medical-related web searches. I also switched over to G Suite for $10/month in the hope that I am not being 'harvested' as much as when I used Google's services for free. I cringe when I see the private data young relatives leak on FB.

EDIT: and, of course, keep location tracking turned off.


> We are careful to use Tor or private browsing with DuckDuckGo when doing medical related web searches.

I shudder to think what conclusions the AI overlords draw from my tracking data showing me going to a methadone clinic multiple times a day as part of my work -- they probably think I'm the most strung-out human in history.

I try to turn off tracking but we all know how that goes...


If you're already a paying data subject, consider ProtonMail with ProtonVPN. For example, ProtonVPN has a built-in Tor mode. Works quite well for me - I can recommend it.


If you're really that paranoid, why not just go to a library and research the old-fashioned way: books.


It's not paranoia when someone's actually out to get you ;) And the web is convenient, even taking reasonable precautions.


I strongly disagree about the middle layer. We have a lot of control over what's observed about our digital behavior.

Location tracking (turn off)

IP address (use a VPN or Tor; see the sketch after this list)

Cookies (don't allow cookies, don't browse the web while logged in to Facebook, Google, etc.)

Javascript, browser fingerprinting (disable unnecessary javascript)

Installing sketchy apps (which is most of them - don't)

Credit card purchases (use cash instead)
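
To make the IP and cookie items concrete, here's a minimal sketch. It assumes a local Tor daemon listening on 127.0.0.1:9050 and the Python requests package installed with SOCKS support (requests[socks]): fetch a page through Tor with every cookie rejected, so the site sees neither your IP nor a reusable cookie identifier.

    # Minimal sketch, assuming a local Tor daemon on 127.0.0.1:9050 and
    # requests[socks] installed: fetch a page over Tor while refusing cookies.
    from http import cookiejar
    import requests

    class BlockAllCookies(cookiejar.CookiePolicy):
        """Reject every cookie a server tries to set."""
        netscape = True
        rfc2965 = hide_cookie2 = False

        def set_ok(self, cookie, request):
            return False
        def return_ok(self, cookie, request):
            return False
        def domain_return_ok(self, domain, request):
            return False
        def path_return_ok(self, path, request):
            return False

    session = requests.Session()
    session.cookies.set_policy(BlockAllCookies())
    # socks5h resolves DNS through Tor too, so lookups don't leak to your ISP.
    session.proxies = {
        "http": "socks5h://127.0.0.1:9050",
        "https": "socks5h://127.0.0.1:9050",
    }

    resp = session.get("https://check.torproject.org/")
    print(resp.status_code, "cookies stored:", len(session.cookies))

This only addresses the IP and cookie items, of course; fingerprinting and sketchy apps still need the browser- and device-level measures mentioned elsewhere in the thread.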


Not enough people recommend the Tor Browser at times like this. It's really good!

>Cookies (don't allow cookies, don't browse the web while logged in to facebook, google, etc)

Even better, try not to have an account on those sites at all.

>Javascript, browser fingerprinting (disable unnecessary javascript)

Firefox has a good about:config option privacy.resistFingerprinting

>Installing sketchy apps (which is most of them - don't)

F-Droid is a good repository of trustworthy apps: https://f-droid.org


Thanks for the additions!

I think Tor browser is great. However, it may not help much unless you also follow the above practices -- don't log in to things, don't allow cookies, run minimal javascript.


Brave gives you Chrome without the tracking and adds optional Tor in private windows, with Brave contributing new relays to the Tor network. Could be helpful if your adversary is ad-tech.


There is one more thing that most people can do but don't: you can block tracking by all those third-party advertising and analytics scripts using add-ons like Disconnect. That stops them from collecting data and retargeting. Most of them work like the Facebook pixel; once loaded, they stay cached even after you close the tab.


I'd be losing anywhere from 1-5% cash back by not using a credit card.


Sure, I didn't mean to say everyone should do all these things (I don't), just that we have the options under our control.


I think that instead of running screensavers when the computer is idle, everyone should be running Page Rankster-style apps that randomly click ads, alternating with porn surfing, just to keep surveillance capitalism and A.I. dumbed down like a mushroom: feed it shit!


If we make the world a safe space where the goal is to meet everyone's needs, then maybe we can live our lives openly and not have to worry about information being weaponized like this?


I like the principle; I have not yet figured out what to think about laws that concern "immoral" private behaviour such as "extreme" porn (any jurisdiction; the point isn't any particular law — Japanese, USA, UK, Saudi — but the principle).

Even if all such laws were repealed, there would still be social consequences for violating taboos.


So just end all discrimination and stigma? That's just a little unlikely.

Even then there will be innocent things some would prefer not to share globally, or be used as a basis for profiling.


Admittedly I didn't read the whole article; the flashing GIFs were giving me a headache. However:

I wonder if this could be manipulated by a bot? It wouldn't be too hard to set up one that posted moral fiber and BS in a timely fashion. Actually, if you had access to the end data and could create identities to test with, this would be an interesting (maybe deep learning) project.


There are countless bots doing this exact thing, in countless different ways, for countless purposes.



