I'm sure your point utterly convinced the clerk being paid minimum wage at your ice cream shop of the errors of his employer.
Would you have accepted it if the clerk explained to you that requiring a Facebook account is the most hassle-free way that ice cream shop has found, considering a) their technical expertise, b) the amount of time and money they need to invest and c) their average customer?
Well, said ice cream shop had a fully open wifi before going through the hassle of creating the captive portal by themselves two years ago.
The thing that really irks me is the assumption that nowadays everyone has Facebook. Isn't it surprising that, in a country that considers national IDs liberticide, the problem of identifying people is being outsourced to a corporate monopoly?
> The thing that really irks me is the assumption that nowadays everyone has Facebook
With over a billion users, you'll have that. My POV is... look, you get to be as contrarian as you want. Feel free to never jump on the FB bandwagon. But understand you're in the minority. Facebook is that big, and is that established in our culture. So, feel free, choose your own path, but then stop whining about the hassle of being in the minority. Nobody cares about your privacy concerns.
My POV is... look, you get to be as contrarian as you want. Feel free to never jump on the Dictatorship bandwagon. But understand you're in the minority. The Dictator is that big, and is that established in our culture. So, feel free, choose your own path, but then stop whining about the hassle of being in the minority. Nobody cares about your freedom concerns.
Like a drug, nobody knows yet the long term effects of Facebook. So when someone acts super paranoid about the effects of something yet unproven, other people come out of the woodwork and try to give the guy some perspective.
It's kind of the same negative attitudes we see towards GMO's, drones or RF radiation.
I mean people could easily flip their perspective and, instead, see these tools as very powerful and, if used correctly, could have massive positive impact on humanity. They can then educate themselves about these tools and figure out how to be the very ones to incur that massive positive impact on humanity using these tools.
Your point is? That we should not argue ("whine") because we might be wrong?
Also, how come you only list other negative attitudes to things that are either still controversial or scientifically validated as probably harmless? How about asbestos, PCB, lots of CO2 in the atmosphere, the Stasi?
Also, are you arguing that we only ever should take action when the end results are in, never because of predictions, because those might be wrong? Or are you saying that all degrees of uncertainty are the same unless zero, and essentially equivalent to total uncertainty?
Seriously, I don't get what you are actually trying to tell me.
I'm saying it's just a social network. It's not going to give you cancer or arrest you in the night or cause massive floods.
And the negative predictions related to privacy about Facebook are largely conspiracy theories. We should take action when something seems plausible and in line with the most accurate, unbiased information out there.
I know you're making a general statement about uncertainty, but I hadn't mentioned it. The only uncertainty in my mind about Facebook is how it will shape our culture's perspective on privacy in the future. Some are paranoid, others seem to be sharing more and more of their personal lives with Facebook. I don't know how that will change.
So, it can have a massive impact, but it's "just a social network"? How do you know the massive impact is going to be positive, and otherwise it's "just a social network"?
North Korea is also a social network, BTW, and it doesn't give you cancer either, nor does it cause massive floods. Yeah, somewhat unfair way of framing it, isn't it? But then so is bringing up (somewhat) unpreventable illnesses and natural disasters, don't you think? And it's not like social structures weren't responsible for some of the worst things that happened in human history, dwarfing by far any flood.
Would you mind sharing some of the conspiracy theories? Most of the arguments I know are based around the abuse potential of large data collections and surveillance systems and the tendency of some people to abuse power (both of which have plenty of historical examples - after all, that's part of why democracy and the rule of law and separation of powers and all that was invented), but I can't seem to remember any conspiracy theories.
I agree fully that we should take action when something seems plausible and in line with the most accurate, unbiased information out there - I might want to add though that the level of confidence required should be weighted by the expected damage if something goes wrong, the larger the expected damage, the more cautious we should be (and analogously for the expected benefits, of course).
Also, in addition to some paranoid people, there are people who are concerned because of well-informed and well thought out arguments. I for one am very concerned indeed. That does not mean in any way that I can't appreciate some of the benefits (it seems to make electronic communication for the common user very easy, it seems, for example), but I also see large risks, and I think there are alternatives from the technical perspective that should be able to provide much the same benefits without the risks, which is why I think that society should probably try to get rid of Facebook, at least the way it currently functions.
I don't get that you want free wifi without being prepared to jump through a small hoop. Delivering that service is not free, so requiring you to do a little work for it, or pay (e.g. to verizon who'll happily provide data connectivity), is not unreasonable.
I love how you first complain about dictatorial regimes, then proceed to declare that "society should probably try to get rid of Facebook, at least the way it currently functions".
What was the problem with dictatorships again? Oh right, stupid edicts from above based on the interpretations of individuals. You want to deny a billion people's freedom to play on Facebook just because of your ethical/moral concerns. Please note that over two billion people feel the same way about killing you/me because of premarital sex; why would we respect your moral concerns and not theirs?
To use your kind of hyperbole: how are you different from the Taliban/dictatorships?
Erm, it might not have been obvious, but let me point out that I actually didn't threaten to kill anyone if you don't abandon facebook immediately.
I mean, seriously, you can't see the difference between advocating a certain view using supporting arguments and threatening people who don't do as you demand?
People calling for society to get rid of nuclear weapons are essentially the same thing as people who would want to kill anyone who has premarital sex because they both put forward a world view that others don't happen to agree with?
I'll answer the first part of your post once you have explained how to understand the second part.
This was just one of their "partners", but as long as you were logged in with Facebook their "partner" would automatically have access to your information. If you notice, it's opt-out.
As you don't seem to know what a strawman argument is, let me explain:
A strawman argument is when you misrepresent the position of your opponent in order to have something that is easier to argue against so as to avoid addressing your opponent's arguments. Arguing against a strawman is fallacious because of the misrepresentation: Your arguments only invalidate the strawman position, not the position of your opponent, but you present it as if those were the same.
What you are seeing above is called an analogy: You present an argument that is similar in structure to what your opponent is using, but which explicitly uses different details, in order to make a problem with the argument's structure stand out more clearly.
The important difference is whether there is misrepresentation - just reframing an argument does not make a strawman, as long as you don't attribute the reframed version to your opponent.
edit: To whoever voted this down: Care to explain why you think explaining to people how to avoid fallaciously accusing others of fallacious reasoning or dishonest arguments is not a good idea?
It is a strawman argument, precisely because it's not a good analogy; instead it's basically an example of Godwin's Law.
The reason why your 'analogy' isn't really such is that setting up a facebook account with just your name, which is all you'd have to do, is nowhere close in severity to willingly following a dictator. It's not analogous.
But first: As I said, the point of an analogy is to make the problem with the structure of an argument stand out more clearly, not to otherwise equate scenarios. The structure of the argument we are dealing with here is "you are in the minority, therefore you should not complain and your concerns are not relevant", the fallaciousness of which becomes a lot more obvious when you replace facebook with a dictator, but the structure being defective does not depend on replacing facebook with a dictator. It's also not an argument about whether or not having to create a facebook account is bad, mind you, it's simply showing that that argument doesn't hold water.
As for Godwin's law: It is a common fallacy to believe that any comparison of anything with an authoritarian leader is a fallacy, and that the name of that fallacy is "Godwin's law". Godwin's law, though, is actually just a meta-observation that discussions tend not to go very far after a certain historical figure or his ideology has been mentioned. Such comparisons can be perfectly valid arguments, though one is well-advised to be careful with them, because the topic carries a lot of historical baggage that can make constructing a valid argument difficult.
And on the general theme of "this analogy is fallacious because those things are so different": The LHC is similar to a cathode ray tube in that elementary particles get accelerated in a vacuum using electromagnetic fields. This is not an invalid comparison just because the LHC is so much bigger than the average CRT and is so totally different in almost all details. It's only a fallacy when you conclude that therefore a CRT consumes megawatts of power.
> Isn't it surprising that in a country that finds national IDs liberticide the problem of identifying people is being outsourced to a corporate monopoly?
This was the core of the problem during all the Snowden leaks last year: a fraction of the population recognized how serious it was, but for the vast majority it just didn't compute. This terrifies me, and I fear that we may have to wait an entire generation before society learns to recognize risks to digital privacy.
But what if the vast majority of the population is right?
What if there are no detrimental long term effects to our society when our digital privacy is compromised? What if the majority of the population ignoring the risks and simply being productive in the areas they work best is the best thing for society?
I'm not arguing this point of view, but I think risks are often exaggerated by the security conscious.
> But what if the vast majority of the population is right?
Honestly what would be the odds of that?
The track record of the vast majority of the population isn't exactly stellar. We tend to be concerned about whatever some smaller minority tells us to be.
(Note that I say "we". We are not immune: Just look at the amount of strictly incompatible opposing viewpoints on HN. At least half of those must be wrong for each viewpoint, making the aggregate of even this collection of relatively smart people dumber than a sack of bricks. Okay, maybe two sacks of bricks.)
So what then? My point is, it's much better to base your assumptions and viewpoints on the particular merits and flaws of an idea, than on whether or not the majority of the population agrees with it.
It's just not relevant. Not at all. The only relevance might be how to steer the majority public opinion, if you want to effect change. A very wise man once said: THINK FOR YOURSELF, SCHMUCK!
My personal view is that the "security conscious" (which includes myself, to some extent, I guess) is in possession of a lot more facts than the majority of the public. Also, their track record is pretty good. Especially since the Snowden revelations, nearly all of the things that used to be dismissed as tinfoil territory turned out to be exactly right. Even RMS' "wacky paranoia" turned out to be not so crazy after all.
Heh, even the "tinfoil hat" itself turned out to be useful, in a sense: wrapping your phone in tinfoil prevents you being tracked (and it's easier than removing the batteries). At least this works perfectly for GSM signals (just try calling a phone wrapped in tinfoil), haven't tried with Wifi or Bluetooth.
> My personal view is that the "security conscious" (which includes myself, to some extent, I guess) is in possession of a lot more facts than the majority of the public. Also, their track record is pretty good. Especially since the Snowden revelations, nearly all of the things that used to be dismissed as tinfoil territory turned out to be exactly right. Even RMS' "wacky paranoia" turned out to be not so crazy after all.
Right, but I'm coming from a pragmatic stance when I put forward the position of the majority of people being "right".
That is, maybe everything the security conscious predict comes to pass. And maybe it has no practical effect on the quality of our lives. That's what I'm suggesting.
Maybe I still go to work, live in the same house, have the same family, and do all the same things I would have otherwise done. Only if I'm security conscious, I feel slightly more worried about it all.
Again, I'm not arguing this personally. Just entertaining the thought.
> That is, maybe everything the security conscious predict comes to pass. And maybe it has no practical effect on the quality of our lives.
I see your point.
Except, I--and the "security conscious" with me--believe that it merely has no practical effect on the quality of our current lives, until it does, and when it does, it's going to be pretty horrible and also kinda too late.
That is, when your current surveillance police state suddenly turns into a much worse bad-wrong oppressive surveillance police state that has the habit of, say, arresting innocent people one or two degrees separated from "activists", keeping them in jail for a week or two, only letting them out on the condition they'll inform on whoever they suspect. This can happen in a flash. It's done so many times before in history, all over the world.
So if the security conscious' predictions are also right about this, then it probably pays to heed their warnings.
What you seem to be saying is, maybe the security conscious were right about all those predictions, maybe they are right about new future predictions, EXCEPT the part where they predict the terrible consequences this ultimately will have on the quality of our lives.
I think one has to distinguish between no privacy and privacy controlled by some central entity, though. If essentially everyone knows everything about everyone, that might work (though I have some doubts how one could get there from here, especially as it would need some fundamental changes in our economic system), but having the control concentrated in one place seems to me to be very risky indeed, as that is a huge pile of power in the hands of a few.
If the average person in the population simply does not care if their private details are exposed, transferred, or looked at, then does the person who holds those details hold power over anyone?
Again, I generally think having some privacy is a good thing, and personally do not use my Facebook or other social accounts outside of a professional context. But sometimes I wonder whether most people care at all, and whether that is actually detrimental to society.
Security advocates can often sound like doomsday prophets, suggesting that the downfall of society begins with private companies amassing personal information.
Yes, it is power, tons of it, in a wide variety of forms.
That someone doesn't care has little effect on how others can use the information - the only power that that removes is the power to embarrass. Any party that you depend on economically can still use the information to their advantage (and your disadvantage). And mind you that that might not only be for irrational reasons - any statistically significant correlation is a perfectly rational reason for some company to refuse you as a customer or to increase prices for you, for example. In securities markets, there even is a name for using private information about planned transactions to gain an advantage, it's called front running, and it's illegal, because it is considered to be essentially stealing the customer's money.
But also, much of the power is not power over individuals, but power over society as a whole, in that such direct access to inter-human communication allows you to find patterns in social dynamics and thus allows you to predict future actions, and what it would need to change the outcome. In essence, that is what marketing is all about - but of course, its applicability is not limited to selling you washing powder, but it can also be used to "sell" political ideas. And in the case of Facebook, they can directly manipulate what people get to see, of course.
And then, there is the intersection between the two, in that there are some people who themselves have more power than others, possibly over you - and if someone gains some power over them directly, that means they might transitively also be gaining power over you.
It would be good to hear about some more concrete examples of this sort of power use (or abuse). Selling political ideas already happens (FOX news), and they don't even need to know your personal details.
I do not like my personal information being collected, I find it tacky and generally only submit details when it's absolutely necessary. But I'm struggling to see the extreme short and long term consequences of a society which submits their private data in this manner.
Your suggestion about charging certain customers more already happens, on airline ticket websites. But it's not a dire consequence, it's a tacky, classless act by greedy shortsighted people.
Sometimes I feel like the people collecting personal details really don't have any power at all. That personal details are overvalued and simply attract funding for these companies.
Concrete examples are difficult in the same way that concrete examples for the abuses of "non-democratic societies" are difficult. I mean, it's not necessarily difficult to find some, but it's difficult to see the big picture from most examples.
Yes, FOX news already happens, but I would argue it's not a good thing, and making it more effective thus probably is even worse?
And yes, I guess much of the risk in a way consists of "tacky, classless acts by greedy shortsighted people", but that does not mean they don't have any real consequences. Corruption is similar - and the effects in some economies are quite devastating, even though the individual bribes might not be that expensive. Big effects can arise from small individual inconveniences.
But I also think that much of the risk lies in the future, with improved analysis algorithms. I think a reasonable model to assume is one of computers that can think and learn similarly to a human, just with much higher input bandwidth for simple facts and a somewhat limited free reasoning ability. That assumption may go a bit too far, but I think it's still a much better model than thinking of it as an improved spreadsheet. Look, for example, at Google Translate - that is in essence a computer learning the translation between languages from humans, without actually being taught anything explicitly. It's no big magic, and yet the results are quite good overall.
And private companies aren't the only ones playing that game, of course, the NSA has a similar power dynamic, and the borders aren't all that clear anyhow, of course, as any data piles that private companies hold tend to also attract intelligence agencies and the like.
But let me try and show some concrete examples of where personal information is or could be used in order to gain power:
In the political arena, I think that gerrymandering is a good example: Parties use known correlations between the personal information they have and voting behaviour in order to increase their chances of winning the election (instead of making the election as representative as possible, which is what would make a functioning democracy).
Or a company could primarily fire people who have predispositions for certain illnesses that could reduce their efficiency later on or they could right from the beginning only hire those who are not affected. If the pool of workers is large enough, that reduces costs. And as a company needs to be competitive, it actually might not even be able to avoid it once competitors start such a practice.
Similarly for insurance companies: From the perspective of the insurance company, their financial goal in a competitive market is to get rid of any customers that will cost them more money than they pay, so whatever data they are able to get their hands on, they probably will try to use for predictions, and as above they will be forced to do so once some competitor does it. From the perspective of society as a whole, though, insurance is particularly important for those expensive cases, as that stabilizes the social structure, while an insurance industry that only insures people who don't need insurance is essentially worthless for society.
Or suppose a totalitarian leader gets elected. There is no easier way to make sure that no one challenges your power than to rank the social graph of your country by number of edges and put in jail anyone who is too well-connected. One important historical case of this type was after Nazi Germany had invaded the Netherlands, where they had all the census information on Hollerith punch cards, including a person's religion. That information was collected without any bad intentions in mind, and yet it ultimately was used for easily finding the Jews to kill them.
Or remember the case of Daniel Ellsberg? Nixon's people broke into his psychiatrist's office in order to try and steal his file, so they could use information from it to discredit him.
Also, how about the use of cellphone location data for drone strikes against people who have had no chance to defend themselves in a court, what the US government calls "targeted killings"?
I agree that your examples present a worse case than I had initially thought. Although I wonder if there are positive benefits for society that balance those out (I can't think of any in particular).
So if there is a net overall negative effect for society, how big is the effect? Is it on the scale of a nuclear war, or more along the lines of the anti-vaccination movement (causes real problems, but not the end of the world).
Well, on the one hand, there are positive effects of the technological development, such as easy communication for people, which in turn might help strengthen social structures. Those benefits, though, do not technically depend on such a centralized structure; they could instead be delivered by federated or peer-to-peer systems with much the same benefits but without the centralization, using cryptography where possible to protect information from eavesdroppers.
Then, well, yeah, arguably there are areas where lots and lots of centralized data collection in principle might be useful for solving real problems. For example, I would imagine that epidemiological studies would be much easier if researchers had access to all medical records of all people, and possibly that could be useful for fighting certain diseases. But then again, we do have some rules in place that allow collection of such data for the really bad stuff, and statistical analysis of anonymized data, so maybe we aren't really losing all that much.
I think the overall effect is more at the catastrophic end, though I would say it's more of a cold war than a nuclear war, at least in the short term: Surveillance does not directly kill you usually, but it can blow up with horrible consequences.
BTW, your anti-vaccination example might be chosen badly: If the anti-vaccination movement were to gain traction with a majority of people, that could indeed be pretty close to the end of the world, at least the world as we know it. It's only a relatively minor problem (on a societal scale) because relatively few people are taken in by it.
This is not about accepting that everyone has facebook, it is about accepting a very easy way for them to solve a problem they are having, which is people doing illegal stuff through their wifi.
It's pretty much the definition of fascism (corporations assuming Regalian tasks) and the main motivation behind my PhD — so yes, it’s freaky. But it shouldn’t be surprising, not as a first step.
Decades after widespread e-mail and web use, very little progress has been made toward improving identity and relationship representation with a usable design outside of Facebook and its competitors (currently: LinkedIn, Twitter; and the less public: Instagram, WhatsApp, Snapchat, Line), all self-centric platforms. Open social media initiatives have been legion, from blog-based solutions (mainly on top of WordPress) to some revitalisation of OpenSocialNetwork spearheaded by Facebook employees.
However, standards-based platforms hit the contradictions raised by social presentation (what is a friend, a follower? should a non-physical person have an identity? what should be public?) even harder than a single platform does. At least a corporate monopoly headed by people who don't mind being called prejudiced can leap over a semester-long nasty debate on gender classification, and focus on scalability and fixing bugs.
My ever-disappointed hope is that, after a handful of social network generations, current decision-makers will know how to recognise a coming tide. To preserve their market share, they will hopefully support a working standard. In that regard, having Facebook and Instagram under the same super-management might help them agree on some commonalities. I believe it will be in their interest to suggest a usable OpenSocial-4.0 that will first let one connect their accounts within one corporate group. In the Facebook/Instagram case, that means articulating multiple identity facets and pseudonyms, and challenging the need for civil criteria such as age and gender. Moving further out, it might mean discussing information-sharing for ad-targeting to avoid commercial conflict with competitors. Possibly it might finally let you connect your own server, and your multiple identities, to your less savvy friends' WordPress blogs or Twitter feeds.
From there on, one can hope to prove their integrity to their ice-cream clerk without having to bow to overly Orwellian institutions. An alternative is to ask a Facebook competitor more agreeable to your concerns (Reddit? Moot's Canvas?) to offer a similar service: tough luck if what matters is law enforcement.
That war of leaky abstractions against our own laziness to express our social ties has been raging since the '80s.
As icky as post-libertarians can seem, their focus on usability helps them set up great services far before any more respectful entity can. Rather than code along and be overtaken, we might benefit from learning how to shift those services toward more open practice later in their development.
"It's pretty much the definition of fascism (corporations assuming Regalian tasks)"
No, it's not. Fascism has an actual meaning and it's not simply "bad people doing things I think are terrible".
"Fascism views political violence, war, and imperialism as a means to achieve national rejuvenation and asserts that stronger nations have the right to expand their territory by displacing weaker nations."[1]
All fascists are hyper-authoritarian, yes, but not all hyper-authoritarians are fascists.
1. requiring people to have a Facebook account to access a free public wi-fi hotspot so they can keep up on Reddit while they eat their fudge brownie ice cream.
2. an authoritarian and nationalistic right-wing system of government and social organization.
No, it's really not. It's very simple to create a fake FB account; all you need is an email. Relying on FB login is a really stupid way to try to verify identity - why not just use emails if they want some minimum bar for registration? FB is no more useful, ubiquitous, or reliable.
I see the attraction for FB in this though; they're desperately trying to come up with ways to avoid becoming the next geocities or myspace - if lots of websites or services use their login or comments system, they will manage to stay at least minimally relevant for a bit longer.
I'm not really sure what's in it for the businesses or consumers though, unless they trust FB...[1]
But it really is good enough for the business offering WiFi.
If Facebook validation causes 90% of their visitors to simply hit the "sign in" button on their default Facebook account, they have probably reduced a significant amount of bad behaviour on their network.
The fact is the vast majority of people are already connected to Facebook under their real names on their mobile device at all times. Taking advantage of that (minimum security through "ease of use") is a good idea.
The person who goes to the store and uses a fake Facebook account to use their free WiFi maliciously is almost non-existent. People will just take the path of least resistance to get what they want now.
Stop thinking of their security in terms of how to circumvent it — that's easy — think of security in terms of which path most users will flow along. Then look at the tradeoffs of increased security to handle the remaining users, you generally find it's not worth bothering with them.
The few people who do want to use the wifi maliciously will find creating a fake Facebook account a very minor obstacle to overcome. Therefore, I don't see what benefits it provides to the business.
How about data provided to the business on who their customers are? This seems especially genius in that it doesn't even require the extra step of "checking-in", but can provide the business with the same data.
This is one of the things that really saddens me about the world we live in - we can't have nice things, because everything is being optimized for the average use case, and average people just don't care much about anything, especially quality.
> average people just don't care much about anything, especially quality.
That's incredibly arrogant, and more than a little naive. The problem is that you're only looking at the small part of the problem which you understand better than the average person. Think about it instead from the perspective of the ice cream shop owner: they're not making any money from your using the network.
You might look at a Facebook captive portal and think it's a poor choice, but your solution isn't optimized for the business's needs: cheap, reliable, and not requiring much staff time to support or special skills.
This particular problem is really a tech industry market failure, as there's never been a serious attempt to build a federated authentication system because the general trend has been trying to force a closed system on the world with the goal of levying a tax later. The most successful attempt was OpenID which was fundamentally mis-designed – inexplicably using URLs instead of email addresses – and quickly derailed by the gratuitous complexity crowd. There's a chance we might see things change with Persona but … I'm going to go out on a limb and guess that this won't happen quickly and, even if it does, it's going to be hard to find many developers willing to turn down a Facebook-scale salary to work on a turn-key level captive portal system priced at what an ice-cream shop can afford to pay.
I upvoted you because your point is reasonable and rational, but let me explain what I meant by my comment.
I do not believe that the ice cream shop owner is doing anything wrong - he's optimizing his business to be maximally cost-effective for him. My complaint is about the current "reality", the system overall. We keep focusing on the average user, or on the lowest common denominator, cutting corners and making everything as cheaply as possible. This is not wrong. I understand the reasons why things are like that, but it doesn't change the fact that it makes me sad.
When you look at movies (or video games), you don't see typos, or subpar, broken, dumb tech. Because the worlds are animated by designers, everything is perfect and beautiful. Real life is not made by graphic designers, but I still wish there was a little more beauty in it. Fewer typos, a bit more care about quality, long-term consequences and general look and feel. They say that perfect is the enemy of good. We live in a world of good enough. I just wish, on an emotional level, things were a bit closer to the "perfect" side.
(disclaimer: I grew up on Star Trek: TNG and later)
(disclaimer2: this comment is written after a few beers, so please forgive the lousy style)
They certainly are making money if I come and eat ice cream knowing they have free WiFi that I can use, and leave (without buying ice cream) if the WiFi doesn't work for me.
If enough people care about the problem and don't buy ice cream as a result, they will likely change their policy.
I don't think this will ever happen, most will buy ice cream and just leave or sign in with their Facebook profile, which is a win-win for the ice cream shop.
Likely this will just cause enterprising people to look for holes in the registration process, like tunneling traffic over port 53 (old), or any number of other methods, especially when you are talking about a unified login system that forces you to do something with FB.
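The port-53 trick mentioned above works because many captive portals let DNS queries through before you log in, so data can be smuggled inside the query names themselves. As a rough illustration of the encoding idea only (not a working tunnel; a real one needs a cooperating nameserver for a domain you control, which is what tools like iodine set up), here's a minimal sketch. The domain `t.example.com` is a placeholder:

```python
import base64

MAX_LABEL = 63  # DNS labels are limited to 63 bytes each

def encode_query(payload: bytes, domain: str) -> str:
    """Pack arbitrary bytes into DNS-legal labels under a domain we control,
    the way simple DNS tunnels smuggle data in query names."""
    b32 = base64.b32encode(payload).decode().rstrip("=").lower()
    labels = [b32[i:i + MAX_LABEL] for i in range(0, len(b32), MAX_LABEL)]
    return ".".join(labels + [domain])

def decode_query(name: str, domain: str) -> bytes:
    """Recover the payload on the tunnel's server side."""
    data = name[: -len(domain) - 1].replace(".", "").upper()
    data += "=" * (-len(data) % 8)  # restore base32 padding
    return base64.b32decode(data)

# Round trip: the "request" survives being disguised as a hostname.
q = encode_query(b"GET /index.html", "t.example.com")
assert decode_query(q, "t.example.com") == b"GET /index.html"
```

The return channel works the same way in reverse, with the server hiding response bytes in DNS answers (e.g. TXT records), which is why blocking only HTTP at the portal doesn't stop a determined user.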
> They certainly are making money if I come and eat ice cream knowing they have free WiFi that I can use, and leave (without buying ice cream) if the WiFi doesn't work for me.
The key thing is recognizing that the WiFi is closer to advertising than to the product – the same situation might arise if they don't have enough chairs, the bathroom is out of order, etc. Just as you don't build out 2,000 seats just in case everyone wants to camp out, you don't want to invest more in the WiFi than the incremental percentage of sales lost.
Oh man, you mean we have to rely on 4G LTE connections instead of using the free wifi provided for us? My email may be marginally slower- I don't think I'll ever be able to enjoy ice cream again :-(
It's not even built for the average user, because too many people are far too stupid even for that, so things suddenly have to be 'for everyone'. Someone with a 100 IQ and a modicum of common sense can deal with such things as an options menu, despite what Google/Apple/increasingly MS think.
> Someone with a 100 IQ and a modicum of common sense can deal with such things as an options menu
It depends on previous context. Another career programmer and I couldn't figure out how to print a Wikimedia image on my uncle's Surface RT. (We tried several different apps and looked for context menus in each one to provide a print option.) As far as we could tell, it couldn't be done without dropping to the desktop. My computer-illiterate uncle showed us that you can just pick Print from the Devices menu in the Charms. (Apparently, the Charms bar is a hybrid system menu and app context menu.)
I didn't have any context for how Win8 works. He did. Then again, I don't know how long it took him to figure it out and whether we would have figured it out eventually.
Despite my bad experience, I'd rather see radical changes in UI than stick with what "works". It only works because we're used to it. Win8 might not be a step in the right direction, but it might lead to improvements.
Well, a surface is explicitly designed for the lowest common denominator. Context-sensitive menu options that change what they do have always been a bad thing as they completely block development of muscle memory for actions; they would only ever logically be useful for people who are not fully able to remember multiple locations and associate them with different actions.
"Make a completely foolproof product and only a fool will use it".
> Context-sensitive menu options that change what they do
The whole point of a context-sensitive menu is that it changes depending on your context. I think they're usually for advanced users -- often the right click menu just duplicates commands from the global menu but it's a faster access method. The Win8 case requires it for all users.
Of course if the context is the same, then the menu should be the same. (Maybe that was your point?)