Hacker News
Stories from July 6, 2012
31. Warby Parker (or, Finding Broken Systems That Are Full of Money) (thinkhard-ly.com)
78 points by yoshizar on July 6, 2012 | 72 comments
32. Introducing Word Lens, for Android (questvisual.com)
76 points by jf on July 6, 2012 | 61 comments
33. Mars One Project: “Humans Will Live On Mars In 2023” (techli.com)
68 points by thematt on July 6, 2012 | 58 comments
34. Cisco backs down, drops cloud from default router settings (cisco.com)
68 points by boh on July 6, 2012 | 12 comments
35. Announcing Recline.JS: a Javascript library to build data apps in browser (okfn.org)
69 points by romil on July 6, 2012 | 7 comments
36. Samsung profits surge 79% boosted by smartphone sales (bbc.co.uk)
68 points by pettermark on July 6, 2012 | 39 comments

Will it change how the entire app industry works? Not really. Will it make life easier for some developers? Probably. Having used Parse in the past and ultimately dropped it for a custom backend, here's where I think Parse fits really well:

1) You're a one-off or small dev shop with no web experience, and making a basic data store with a JSON API as a backend is beyond your technical means (this is by far the best use).

2) Your data isn't that complex (stuff like high scores and the like), so the fact that this is just a big Key/Value store won't harm your performance much.

3) You also don't need any server-side intelligence, just a data store and an API.

4) Your growth and revenue model is in line with the pricing model of the service you selected.

Now here are the things I struggle with, and why, where I work, we stay away from Parse/Stackmob/Kinvey/Appcelerator ACS/whoever when we develop mobile apps. These apply to our customers and projects, and may not be the same for you.

1) We almost always want our server to be intelligent, not just an API-wrapped database. When I want to trigger push notifications, I want to do it with server-side logic, not everything running in the app. Our big beefy AWS servers are way more capable than someone's 2-year-old free Smartphone.

2) It doesn't take long for monthly fees to outpace the cost of just having your own server, or eat into your profitability. Most of these services work out to about 3-5 cents per user per month. Depending on the size and complexity of the app, and what the revenue generated from it looks like, that can be vastly more expensive than a small AWS or Heroku deployment.

3) It's rare that we need a data store as simple as "users + key/value". It's one thing to build a basic chat app or worldwide high score in a system like that. It's another to try and build an inventory tool or CRM.

4) Mobile data connectivity is high-latency and unreliable. Making a dozen API calls to populate a single screen is not only inefficient, it can be very frustrating to your users. An API call for a screen/view/fragment should return exactly the data needed, no more, no less. Let the database do what databases do best: sort and collect data. Let the phone just display it.

So really, for small apps or apps with relatively simple data requirements, Parse does a fine job, especially if you don't have server people on your team (in which case, BaaS is your only real option). If the app's data is going to be complex, require server-side logic, or call for optimized API calls, you're probably better off building your own.
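The "one API call per screen" point above can be sketched as a purpose-built aggregation endpoint. This is a minimal illustration with hypothetical names and a toy in-memory store, not Parse's API or any real backend:

```python
# Instead of the client making several generic key/value calls
# (GET /users/1, GET /users/1/scores, GET /friends?user=1),
# a purpose-built endpoint assembles everything one screen needs
# server-side and returns it in a single payload.

def profile_screen(user_id, db):
    """Return exactly the data the profile screen renders, in one payload."""
    user = db["users"][user_id]
    return {
        "name": user["name"],
        "high_score": max(user["scores"]) if user["scores"] else 0,
        "friend_names": [db["users"][f]["name"] for f in user["friends"]],
    }

# Toy in-memory "database" standing in for the real backend.
db = {"users": {
    1: {"name": "alice", "scores": [10, 40], "friends": [2]},
    2: {"name": "bob", "scores": [], "friends": []},
}}

payload = profile_screen(1, db)
# One round trip delivers name, high score, and friend names together,
# instead of three high-latency mobile requests.
```

The sorting and aggregation happen where the data lives; the phone just renders the result.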


It was clear which camp you fell into just by the way you defined the two.
39. Improving Linux performance by preserving Buffer Cache State (oetiker.ch)
61 points by osivertsson on July 6, 2012 | 9 comments
40. "Don't f***ing trim the copy" (jessepollak.me)
54 points by jessepollak on July 6, 2012 | 61 comments
41. Powerful targeting for iOS push notifications (parse.com)
53 points by bjacokes on July 6, 2012 | 4 comments

I think the takeaway from this should be slightly different than the one the author found: don't fight users' expectations of what your product is.

When Chrome came out, it was a new product; users had no expectations of "what Chrome is." So Google could do with it essentially whatever they wanted.

Firefox was in a completely different position. When Mozilla moved Firefox to rapid release, the product branded "Firefox" had been in users' hands for seven years -- nine if you count pre-1.0 versions. Over that time, users' expectations of what a product branded "Firefox" was settled into a particular place.

Rapid release was painful because it broke those expectations. Suddenly Firefox didn't behave like people expected a product labeled "Firefox" to behave anymore. It's like opening a bottle labeled "Coke" and having orange juice pour out. "But orange juice is better for you!" Yeah, but that's still not what I expect to get when I open a Coke bottle.

There's a simple way Mozilla could have avoided this: just call the rapid-release product something else. Create a new brand, and put it on a version of the browser that receives updates every six weeks. Call it Fastfox or Frequentfox or really anything other than Firefox. Then encourage users to start moving from Firefox to the new hotness. Make the new product compatible with Firefox extensions, but don't do the version-number compatibility check that Firefox does, so users aren't constantly being prompted to update working addons.

(Yes, both products are the same code under the hood. That doesn't matter. The important thing is that you communicate to users that this is a thing that is something other than Firefox, which resets their expectations.)

Eventually you'd have most users on the new product, since that's where the cool new features that get users excited would be showing up. Curmudgeons and enterprises would stay with boring old Firefox, but that's OK, because you make "Firefox" just a periodic snapshot of Frequentfox development. "Firefox" becomes a legacy brand, maintained for those who care about it. But the new brand is clearly established as the new hotness.

This lets you move your users without violating their expectations. They expect the new product to behave differently, because it's a new product. It's got a different name and everything!

Violent changes in direction for an established product, on the other hand, always tick off users, because they do violence to those users' expectations of what that product is.

It's better to send your well-loved legacy brand gracefully off into the sunset, in other words, than to try and shock new life into it with electric paddles.

43. Designing Docs for Developers (trigger.io)
59 points by amirnathoo on July 6, 2012 | 3 comments
44. Argentina bans dollar purchases for savings (bloomberg.com)
53 points by wslh on July 6, 2012 | 97 comments

Having been in this situation before (albeit never missing something as monumental as the birth of my child), I would add another reason for this type of faulty prioritization: vanity.

It's so easy to be tempted by the notion that I am in fact so crucial and important to the business at hand that I have to attend. Flaunting the notion of my own importance was even bizarrely empowering. It's of course completely hollow and meaningless, and ultimately self-defeating.

46. Kim Dotcom Can See [Only] One File Of 22 Million Says FBI (stuff.co.nz)
48 points by chaostheory on July 6, 2012 | 10 comments
47. On Extensions, Userscripts, and Archivers (4chan.org)
48 points by saxamaphone69 on July 6, 2012 | 38 comments

Posner's right. It seems that there needs to be some new notion of what constitutes patentability.

Drugs are an interesting case as they have these factors going for them:

1) Expensive to bring IP to market. Lots of testing and clinical trials for drugs. For SW it is pretty cheap now.

2) Easy and cheap to copy. Generic versions can be reverse-engineered quickly. Probably just as easy to copy software, but I still feel like this is probably a useful pillar.

3) The IP by itself constitutes the majority of the value of the product. In medicine there isn't typically a ton of other IP that comes together to form the product. In SW there is rarely a single piece of IP that is more than a small fraction of the value of the product.

4) The IP has longevity as a standalone product. Viagra can be sold for decades. Aspirin still probably does hundreds of millions in revenue. There is little SW IP that, by itself, has longevity. The nature of SW is to continuously improve it.

5) Time to market isn't a huge advantage. Since most medicine is effectively sold as a commodity, being 6 months ahead of your competition usually just means you have 6 extra months of revenue. Whereas in SW it also gives you 6 months to build on your current IP. In medicine you don't typically do Viagra 2.0, with a boatload of new IP that makes the original obsolete (and leaves any competitors shipping the old version scrambling).

49. Berners-Lee: World Finally Realizes The Web Belongs To No One (wired.com)
50 points by coderdude on July 6, 2012 | 2 comments
50. Samsung wins bid to sell Nexus in court (yahoo.com)
45 points by keltex on July 6, 2012 | 17 comments

If you think that's cool, look at Daeken's Magister: http://demoseen.com/windowpane/magister.png.html

A PNG that's interpreted as HTML and loads itself as compressed JavaScript!
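The real Magister file takes a lot more care than this (the binary bytes before the markup must not break the HTML parser), but the basic idea, a single file that is simultaneously a valid PNG and carries markup inside a text chunk, can be sketched as follows. The HTML payload and chunk contents here are purely illustrative:

```python
import struct
import zlib

def chunk(ctype, data):
    """One PNG chunk: 4-byte length, type, data, CRC over type + data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

# IHDR for a 1x1, 8-bit grayscale image.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
# Image data: one filter byte plus one pixel, zlib-compressed.
idat = zlib.compress(b"\x00\x00")
# Markup smuggled into a tEXt chunk (keyword, NUL, then the text).
html = b"<script>/* page logic would go here */</script>"

png = (b"\x89PNG\r\n\x1a\n"          # PNG signature
       + chunk(b"IHDR", ihdr)
       + chunk(b"tEXt", b"Comment\x00" + html)
       + chunk(b"IDAT", idat)
       + chunk(b"IEND", b""))
```

An image decoder sees a valid 1x1 PNG and ignores the text chunk; a browser handed the same bytes as text/html will find the markup embedded in the stream. Magister pushes this further by having the markup pull the rest of the file back in as compressed JavaScript.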


It is possible, but the neurons are actually complex computing units that take a plethora of signals into account: subtle temporal behavior (relative timings of post- and pre-synaptic activations), complex chemistry in the cell as well as in the synaptic cleft, and many more less understood things. Secondly, as you can imagine, the connectivity is far from random. Like in larger nervous systems (even more so than in large pools of neurons), these computing units take precise roles, partly due to the developmental process, partly due to later-stage learning. But in the end, the machinery is precise, and simulating it requires understanding all of it.

I believe that this understanding should not come through imagery alone, but also through studies of the development of these organisms' brains. Just as we are now forming artificial neural maps through "machine" learning techniques that are accelerated versions of their biological counterparts, I believe that we should build developmental algorithms into large-scale simulations. The goals are, first, to understand how it works, and second, to see whether the models can replicate the end result, a good indication that they are useful models.

(I do research in computational neuroscience.)
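The spike-timing sensitivity mentioned above is often modeled with spike-timing-dependent plasticity (STDP). This is the standard textbook exponential update rule with arbitrary example parameters, not a model of any particular organism:

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Synaptic weight change from one pre/post spike pair (times in ms).

    Pre-synaptic spike before post-synaptic (dt > 0) potentiates the
    synapse; the reverse order depresses it. The effect decays
    exponentially with the time difference, with time constant tau.
    """
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

# Causal pairing (pre fires 5 ms before post) strengthens the synapse...
assert stdp_dw(t_pre=10.0, t_post=15.0) > 0
# ...anti-causal pairing weakens it.
assert stdp_dw(t_pre=15.0, t_post=10.0) < 0
```

Even this toy rule shows why relative spike timing, not just firing rate, matters to what a circuit learns.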

53. New Data Science eBook – Free and Open-Source (datascience101.wordpress.com)
45 points by swGooF on July 6, 2012 | 3 comments
54. Streak Releases Mâché: Easy Log Analysis for App Engine (streak.com)
44 points by alooPotato on July 6, 2012 | 5 comments
55. Puzzling outcomes in A/B testing (glinden.blogspot.co.uk)
43 points by ot on July 6, 2012 | 4 comments

The other thing is that Chrome updates work. They've only broken something once in I think 3 years of daily use at this point (some update unchecked "warn before quitting"). They also don't generally create popups and the update process itself is literally invisible.

Firefox on the other hand... sigh. I really liked that browser, but the updates, and the broken plugins, and the shit memory management, and the developers' heads up their shitty asses denying their shit memory management, and did I mention the enormous memory usage on OSX? Chrome essentially has the same uptime as my laptop, ie months. Firefox required daily restarts with the same browsing patterns. Plus they regularly corrupt the backing file that holds the sites you had open in your browser, so before restarting the browser to get ram back, it's a best practice to copy the location of each and every open tab to a text file.

We got my gf's laptop 8 gigs of RAM to accommodate FF, and I'm working on talking her into Chrome. A browser written by people who know how to use free(3). Plus the updates don't hork random stuff. Plus on the rare occasions I've restarted the browser, it has never broken the backing store holding the open tabs.


I don't like Software Architects.

What is implicit in the SA role is that the rest of the dev team is full of dummies that can't think strategically or architect big parts of the system.

This naturally leads to the SA producing ever higher and more complicated abstractions that the simian developer tries to implement, poorly, and the SA becomes an irreplaceable high priest -- inflating their salaries and prominence and depressing the salaries of the rest of the team.

Where I work, every developer is expected to be "architect-quality". Sure, more junior folks don't have the experience, but they're expected to have the capacity for strategic, high-level thinking. As they mature, they take on more and more architectural decisions. If they can't do that, then we've hired the wrong person.

Ultimately, that's the only way I know how to scale dev teams and produce amazing software. The SA/Dev stratification is an anti-pattern.


On the opposite side of things, here are some of the largest single-celled organisms:

http://en.wikipedia.org/wiki/Xenophyophore
http://en.wikipedia.org/wiki/Caulerpa
https://en.wikipedia.org/wiki/Valonia_ventricosa

59. Hamster - Efficient, Immutable, Thread-Safe Collection classes for Ruby (github.com/harukizaemon)
39 points by justinweiss on July 6, 2012 | 30 comments

Actually, Firefox developers care a good deal about memory management. More than add-on authors do, unfortunately.

We've had a project called MemShrink running for about a year, and it has been making steady progress. And that progress has been shipping to users.

The current Aurora channel release even has a fix for most add-on bad behavior.

Firefox actually stacks up really well in memory usage (even in third party tests).

(I work for Mozilla but was not involved in MemShrink. Also, I wrote this on my phone, so I hope there are no embarrassing misspellings. )

