Hacker News

> "IP Logging: By default, we do not keep permanent IP logs in relation with your use of the Services. However, IP logs may be kept temporarily to combat abuse and fraud, and your IP address may be retained permanently if you are engaged in activities that breach our terms and conditions (spamming, DDoS attacks against our infrastructure, brute force attacks, etc). The legal basis of this processing is our legitimate interest to protect our Services against nefarious activities."

Just how transient do logs need to be to fit these criteria?

I'm guessing the seven years or so we need to retain some of our specific logs might fit the "temporary" definition too...



Nothing is even defined. There is nothing in this policy that obligates the company to do, or restricts the company from doing, anything.

This is the way most tech company "Privacy Policies" are written.

There is a significant difference between a statement such as "We (Company) do not do X" versus a promise such as "Company shall not do X" or a statement such as "We do Y. We may do Z" versus a promise such as "Company shall do Y."

Why not have our own "Policies" as users, which we publish for tech companies to read? In them, we could describe what we do and do not do, and what we may or may not do. Tech companies could rely on these statements. You can see how silly that sounds. Yet users are expected to read and rely on hundreds of different "privacy policies", collections of non-binding statements like "We take privacy seriously".


Solid points. A report card for various privacy policies would be useful. Seen anything like that?


I wouldn't trust any of them to be honest. Privacy policies really aren't worth anything.

I would look for websites that generally do not ask for data, then send them the minimum data you can get away with. I would avoid "signing in" or "signing up" to any website. There is an immense amount of data and information available for free on the web; no "account" is necessary.

For example, I don't send any extra HTTP headers like Cookie or User-Agent, and I don't use Javascript. I don't request images or CSS, and I don't automatically follow links in src attributes. Yet I can still read and comment on HN, and I can read every website posted to HN. That's a lot of websites. I can read them just fine while not sending them any more data than is needed. Because I do not use a large, complex graphical browser sponsored by an online ad-supported vendor to make HTTP requests, I can easily control what I send. This is far better, IMO, than sending unknown amounts of data (letting the websites control what the browser sends via headers, Javascript, and src attributes) and then hoping the websites don't do things with the data that we don't like.
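The approach described above can be sketched in a few lines of shell. The helper name `min_req` is my own invention, not the commenter's actual tool; the point is that a hand-built request carries only the mandatory Host header, with no Cookie, User-Agent, or Referer lines for the site to fingerprint:

```shell
# Hypothetical sketch: build a bare HTTP/1.1 request by hand.
# Only the mandatory Host header is sent -- nothing else.
min_req() {
  host=$1
  path=${2:-/}
  printf 'GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n' "$path" "$host"
}

# Usage: pipe the bytes through any plain TCP/TLS client, e.g.:
#   min_req news.ycombinator.com /news \
#     | openssl s_client -quiet -connect news.ycombinator.com:443
```

Because the request is assembled explicitly, there is no header the user did not choose to send; the browser-controls-the-headers problem disappears by construction.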

Privacy policies do not limit websites from collecting data, nor do they limit how the data can be used. They are "policies", not agreements. If a website operator does things behind the scenes that violate privacy but are not disclosed in its "privacy policy", what can a user do? How would the user even know? Or the operator could clearly violate its own "privacy policy", and no one outside the company would know. Even if users discover the violation, what would the repercussions be? It's too late, because a violation means privacy has already been blown.

Show me a case where a tech company got sued for violating a privacy policy. How can anyone prove a violation if the operations of the website are not open to public inspection?


The GDPR is meant to address this problem; check out https://www.enforcementtracker.com/ to see the most recent rulings.

Your observations are correct: the regulation can only deal with the damage after it has been done, applying some form of punishment after the fact. It takes whistleblowers and activists to reveal some of the wrongdoing; otherwise the violations remain unknown to us.

Therefore there has to be a greater push towards proactive measures: letting people vote with their wallets by making informed choices. The prerequisite for that is for the relevant information to be available to them.

Have a look at some of the research I've been involved in; we're trying to solve this exact problem: http://privacy-facts.eu/


Wow, neat. How do you browse the web? Lynx?


Generally I use custom utilities for HTTP generation, URL extraction, chunked transfer decoding, URL encoding/decoding, GZIP/ZIP/PDF/MP4 extraction, etc. Thus I can use any TCP client I want to make HTTP requests from the command line. I do not need a browser to request content. Nor do I need projects like curl or projects that use libcurl like youtube-dl. For large downloads I use tnftp. The shell script I use to download YouTube videos is 424 bytes.
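As an illustration of the kind of small utility mentioned above (this is my own sketch, not the commenter's actual tool), URL percent-decoding needs nothing beyond awk, which keeps the dependency footprint tiny:

```shell
# Hypothetical sketch: URL percent-decoding with plain awk.
# Decodes %XX escapes (upper- or lowercase hex) and '+' as space.
urldecode() {
  printf '%s\n' "$1" | awk '
    BEGIN {
      # Map both "2F" and "2f" style hex pairs to their byte values.
      for (i = 1; i < 256; i++) {
        c = sprintf("%c", i)
        hex[sprintf("%02X", i)] = c
        hex[sprintf("%02x", i)] = c
      }
    }
    {
      out = ""
      n = length($0)
      for (i = 1; i <= n; i++) {
        c = substr($0, i, 1)
        if (c == "%" && i + 2 <= n) { out = out hex[substr($0, i + 1, 2)]; i += 2 }
        else if (c == "+")          { out = out " " }
        else                        { out = out c }
      }
      print out
    }'
}
```

For example, `urldecode 'a%20b%2Fc'` prints `a b/c`. Keeping each such utility to a single small function is what makes a shell pipeline competitive with a monolithic HTTP client.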

For reading HTML I prefer links. It has the best rendering of HTML tables, IMO, and for me it's the easiest source code to work with. I did use lynx back in the late '90s but would never go back to it. It's bloated. It's slow. I'm not sure why anyone interested in text-only browsers would use it, other than being unaware of the alternatives or never having tried them.


Kind of limited since it's only evaluating websites, but https://tosdr.org/ attempts to implement this as a plugin.

Edit: And amusingly it only gives protonmail.com a B.


There is a project for bringing relevant privacy facts closer to users, so they can make informed decisions before choosing to buy a specific product or service. Check out the second screenshot on this page: http://privacy-facts.eu/features/

The idea is to express everything in unambiguous terms, so there's no way to weasel out of it with "yeah, but not by default" or "well, it depends", etc.



