Hacker News | virgilp's comments

Waterfall was bad due to the excessively long feedback loops (months-to-years from "planning" to "customer gets to see it/ we receive feedback on it"). It was NOT bad because it forced people to think before writing code! That part we should recover, it's not problematic at all.

If people actually read the original paper by Royce (1970) they would see that it describes an iterative process with short feedback loops.

The bad rep comes from defense/government contracting, where PRDs were tied to money and change requests were expensive; see http://www.bawiki.com/wiki/Waterfall.html for more detail.


When you do most of the thinking before you start implementing the whole thing, and if you think that that's enough, then you've missed the unknown unknowns part, which was a big talking point in the mid 2000s, back when the anti-waterfall discourse got going (and for good reason).

But I expect the AI zealots to start (re-)integrating Extreme Programming (later rebranded as Agile) back into their workflow, somehow.


QA long ago merged with programming into "unified engineering". So did SRE ("devops"), and now the trend is to merge with CSE and product management too ("product mindset", forward-deployed engineers). So yeah, pretty much, that's the trend. What would you trust more - an engineer doing project management too, or a project manager doing the engineering job?

The PMs and QAs I know would disagree with that assessment.

> What would you trust more - an engineer doing project management too - or a project manager doing the engineering job?

If one of the three, {PM, QA, coder}, was replaced by AI, as a customer I'd prefer to pick the team missing the coder. But for teams replacing two roles with AI, I'd rather keep the coder.

But a deeper problem now is, as a customer, perhaps I can skip the team entirely and do it all myself? That way, no game of telephone from me to the PM to the coder and QA and back to me saying "no" and having another expensive sprint.


If I'm managing a company of about 10 people to do something in the physical world, I'd probably skip the PM & QA, hire the engineer, and have the engineer task the LLM with QA given a clear set of requirements, then manage the projects given a clear set of deadlines. A good SE can do a "good enough" job at QA and PM in a small company, such that you won't notice the PM & QA are missing. But a PM & QA can always be added, or QA can be augmented with a specialist, assuming you're LLM-driven.

Of course if none of your software projects are business-critical to the degree that downtime costs money pretty directly then you can skip it all and just manage it yourself.

The other thing you should probably understand is that the feedback cycle for an LLM is so fast that you don't need to think in terms of sprints or "development cycles". In many cases, if you're iterating on something, your own work to acceptance-test what you're getting is actually the long pole, especially if you're multitasking.


> If one of the three, {PM, QA, coder}, was replaced by AI, as a customer I'd prefer to pick the team missing the coder.

I am curious: why? In all my years of career I've seen engineers take on extra responsibilities and do anywhere from a decent to a fantastic job at them, while people who start out much more specialized (like QA / sysadmins / managers) I have historically observed struggling more. Obviously there are many talented exceptions; they just never were the majority, per my anecdotal evidence.

In many situations I'd bet on the engineer becoming a T-shaped employee (wide area of surface-to-decent level of skills + a few where deep expertise exists).


> The PMs and QAs I know would disagree with that assessment.

It just depends on the org structure and what the org calls different skills. In lots of places now PM (as in project, not product) is in no way a leadership role.


QA is still alive and well in many companies, including manual QA. I'm sure there's a wide range these days based on industry and scale, but you simply don't ship certain products without humans manually testing them against specs, especially if it's a highly regulated industry.

I also wouldn't be so sure that programming is the hardest of the three roles for someone to learn. Each role requires a different skill set, and plenty of people will naturally be better at or more drawn to only one of those.


From my experience with modern software and services, the actual practice of QA has plainly atrophied.

In my first gig (~30 years ago), QA could hold up a release even if our CTO and President were breathing down their necks, and every SDE bug-hunted hard throughout the programs.

Now QA (if they even exist) are forced to punt thousands of issues and live with inertial debt. Devs are hostile to QA and reject responsibility constantly.

Back to the OP, these things aren't calculable, but they'll kill businesses every time.


Totally feel this. We're trying to fill that gap with Autonoma (https://www.getautonoma.com/), open-source AI agents that run E2E tests in real browsers without needing to write or maintain test scripts.

Continuous delivery really killed QA.

It's not the role of QA to be a gatekeeper; they give the CTO and President information on the bugs and testing, but it's a business decision whether to ship or not.

I’m not a native English speaker, but isn’t gatekeeping exactly that? Blocking suspicious entities unless they’re allowed through by someone higher in the hierarchy?

QA merged originally out of programming.

emerged?

Actually, no. We always needed good checks - that's why you have techniques like automated canary analysis, extensive testing, checking for coverage - these are forms of "executable oracles". If you wanted to be able to do continuous deployment - you had to be very thorough in your validation.

LLMs just take this to the extreme. You can no longer rely on human code reviews (well you can but you give away all the LLM advantages) so then if you take out "human judgement" *from validation*[1], you have to resort to very sophisticated automated validation. This is it - it's not about "inventing a new language", it's about being much more thorough (and innovative, and efficient) in the validation process.

[1] never from design, or specification - you shouldn't outsource that to AI; I don't think we're close to an AI that can do that even moderately effectively without human help.
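To make the "executable oracle" idea concrete, here is a minimal, hypothetical sketch (all names invented for illustration, not from any specific tool): an automated check that judges a candidate implementation against its contract without a human in the loop, which is exactly the kind of validation you lean on once human code review stops scaling.

```python
# A minimal "executable oracle": an automated check that validates an
# implementation's behavior without human review. Names are illustrative.

def sort_oracle(impl, cases):
    """Return the inputs (and outputs) where `impl` violates the sorting contract."""
    failures = []
    for case in cases:
        out = impl(case)
        # Contract: same length (nothing dropped), correctly ordered.
        if not (len(out) == len(case) and out == sorted(case)):
            failures.append((case, out))
    return failures

def buggy_sort(xs):
    # Silently drops negatives - the kind of edge-case bug a tired reviewer misses.
    return sorted(x for x in xs if x >= 0)

print(sort_oracle(sorted, [[3, 1, 2], [], [-1, 5]]))  # → []
print(sort_oracle(buggy_sort, [[3, 1, 2], [-1, 5]]))  # → [([-1, 5], [5])]
```

The oracle doesn't care who (or what) wrote the implementation; it only checks the contract. Canary analysis and coverage gates are the same idea applied at deployment and test-suite scale.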


If the LLM generates code exactly matching a specification, the specification becomes a conventional programming language. The LLM is just transforming from one language to another.


Yes, but a programming language with a proverbial sufficiently smart compiler. That is very useful.


Try writing an exhaustive spec for anything non-trivial and you might see the problem.


Been saying this for a while now. I work in aerospace, and I can tell you from first hand experience software engineers don't know what designing a spec is.

Aero, mechanical, and electrical engineers spend years designing a system. Design, requirements, reviews, redesign, more reviews, more requirements. Every single corner of the system is well understood before anything gets made. It's a detailed, time consuming, arduous process.

Software engineers think they can duplicate that process with a few skills and a weekend planning session with Claude Code. Because implementation is cheaper we don't have to go as hard as the mechanical and electrical folks, but to properly spec a system is still a massive amount of up front effort.


And software isn't as constrained by physics as hardware, which massively expands both the design space as well as how many ways things can go wrong.


Llm boys discover the halting problem!


I honestly don't see how this is related? Nothing says "one shot a full system from a perfect specification", I don't think this was ever a goal (or that it will be practical to do so)


Also: if that one particular AI-produced compiler has nothing innovative, that only means that the human "director" behind the AI didn't ask it to produce anything innovative; what it does not mean is that AI can never produce anything innovative in a compiler.


> if that one particular AI-produced compiler has nothing innovative, that only means that the human "director" behind the AI didn't ask it to produce anything innovative

Couldn't it also be true that the AI didn't produce innovative output even though the human asked it to produce something innovative?

Otherwise you're saying an AI always produces innovative output, if it is asked to produce something innovative. And I don't think that is a perfection that AI has achieved. Sometimes AI can't even produce correct output even when non-innovative output is requested.


> Couldn't it also be true that the AI didn't produce innovative output even though the human asked it to produce something innovative?

It could have been, but unless said human in this case was lying, there is no indication that they did. In fact, what they have said is that they steered it towards including things that makes for a very conventional compiler architecture at this point, such as telling it to use SSA.

> Otherwise you're saying an AI always produces innovative output

They did not say that. They suggested that the AI output closely matches what the human asks for.

> And I don't think that is a perfection that AI has achieved.

I won't answer for the person you replied to, but while I think AI can innovate, I would still 100% agree with this. It is of course by no means perfect at it. Arguably often not even good.

> Sometimes AI can't even produce correct output even when non-innovative output is requested.

Sometimes humans can't either. And that is true for innovation as well.

But on this subject, let me add that in one of my first chats with GPT 5.1, I think it was, I asked it a question on parallelised parsing. That in itself is not entirely new, but it came up with a particular scheme for parallelised (GPU-friendly) parsing and compiler transformations I have not found in the literature (I wouldn't call myself an expert, but I have kept tabs on the field for ~30 years). I might have missed something, so I intend to do further literature search. It's also not clear how practical it is, but it is interesting enough that when I have time, I'll set up a harness to let it explore it further and write it up, as irrespective of whether it'd be applicable for a production compiler, the ideas are fascinating.


"Waterfall" got a bad rep because it meant "we stay months in the requirements gathering, then months design phase, then months in development, then months in validation". If you compress "months" to days/hours, what you obtain is something that nobody from the 90s would recognize as "waterfall"; it is not the end of agility, far from it.


Cool but it is not a framework for working with AI, it is an _opinionated_ framework for building full-stack apps right? As in, I can't use any of it if I'm building, say, a Spark data processing pipeline. Or a ML framework. Or automation software that runs on custom processors.

The idea of "guardrails outside the model" is definitely appealing but I wonder if you can make it generalize well.


You’re right of course, the framework is pretty rigidly tied to full stack.

But the underlying idea I think has power - find a process you can codify enforcement of rather than telling models how they should do things.

Spec-driven development probably means creating tooling to track how code maps to specs, for example, and then managing that as data. Then you can query the data to confirm all mappings between code and specs. That gets you out of the business of nicely and repeatedly asking very expensive and undependable models such queries :)
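A tiny sketch of what "spec mapping as data" might look like, with all spec IDs, file names, and the mapping format invented for illustration: harvest which spec IDs each code unit claims to implement, then answer the coverage question with a query instead of a model call.

```python
# Hypothetical spec-to-code traceability data. In practice this might be
# harvested from structured comments or decorators in the codebase.

SPEC_IDS = {"SPEC-1", "SPEC-2", "SPEC-3"}

code_mappings = {
    "auth/login.py": {"SPEC-1"},
    "billing/invoice.py": {"SPEC-2"},
}

def coverage_report(spec_ids, mappings):
    """Report specs with no implementing code, and code citing unknown specs."""
    covered = set().union(*mappings.values()) if mappings else set()
    return {
        "unimplemented": sorted(spec_ids - covered),
        "unknown_refs": sorted(covered - spec_ids),
    }

print(coverage_report(SPEC_IDS, code_mappings))
# → {'unimplemented': ['SPEC-3'], 'unknown_refs': []}
```

Once the mapping is data, "is everything in the spec implemented?" is a deterministic set difference rather than a question you hope an expensive model answers consistently.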


Nothing of what you write here matches my experience with AI.

Specification is worth writing (and spending a lot more time on than implementation) because it's the part that you can still control, fully read, understand etc. Once it gets into the code, reviewing it will be a lot harder, and if you insist on reviewing everything it'll slow things down to your speed.

> If the cost of writing code is approaching zero, there's no point investing resources to perfect a system in one shot.

The AI won't get the perfect system in one shot, far from it! And especially not from sloppy initial requirements that leave a lot of edge (or not-so-edge) cases unaddressed. But if you have good requirements to start with, you have a chance to correct the AI and keep it on track; you have something to go back to and ask another AI, "is this implementation conforming to the spec, or did it miss things?"

> five different versions of the thing you're building and simply pick the best one.

Problem is, what if the best one is still not good enough? Then what? You do 50? They might all be bad. You need a way to iterate to convergence.


Same. I've sort of converged on: make a rough plan, get second and third opinions on it from various AIs, decide and make choices while shaping the plan, and turn that into a detailed spec sheet. Then follow the "How to Design Programs" method, which is mostly writing documentation first, then expected outcomes, then tests, then the functions, then testing the flow of the pipeline. This usually looks like starting with Claude to write the documentation and expectations and create the scaffolding, then having Gemini write the tests and the code, then having Codex try to run the pipeline and fix anything it finds broken along the way. I've found this to work fairly well. It's looser than waterfall, but waterfall-ish; it's also sort of TDD-ish. We know there will be failures and things to fix, but we also know the overall strategy and flow of how things will work before we start.


This. Waterfall never worked for a reason. Humans and agents both need to develop a first draft, then re-evaluate with the lessons learned and the structure that has evolved. It’s very very time consuming to plan a complex, working system up front. NASA has done it, for the moon landing. But we don’t have those resources, so we plan, build, evaluate, and repeat.


That "first draft" still has to start with a spec. Your only real choice is whether the spec is an actual part of project documentation with a human in the loop, or it's improvised on the spot within the AI's hidden thinking tokens. One of these choices is preferable to the other.


I agree, and personally I often start with a spec. However, I haven’t found it useful to make this very detailed. The best ROI I‘ve been getting is closing the loop as tightly as possible before starting work, through very elaborate test harnesses and invariants that help keeping the implementation simple.

I‘d rather spend 50% of my time on test setup, than 20% on a spec that will not work.


So, rollback and try again with the insight.

AI makes it cheap to implement complex first drafts and iterations.

I'm building a CRM system for my business; first time it took about 2 weeks to get a working prototype. V4 from scratch took about 5 hours.


AI is also excellent at reverse engineering specs from existing code, so you can also ask it to reflect simple iterative changes to the code back into the spec, and use that to guide further development. That doesn't have much of an equivalent in the old Waterfall.


Yeah, if done right. In my experience, such a reimplementation is often lossy, if tests don’t enforce presence of all features and nonfunctional requirements. Maybe the primary value of the early versions is building up the test system, allowing an ideal implementation with that in place.

Or put this way: We’re brute forcing (nicer term: evolutionizing) the codebase to have a better structure. Evolutionary pressure (tests) needs to exist, so things move in a better direction.


What matters ultimately is the system achieves your goals. The clearer you can be about that the less the implementation detail actually matters.

For example: do you care if the UI has a purple theme or a blue one? Or if it's React or Vue? If you do, that's part of your goals; if not, it doesn't really matter if V1 is blue and React but V4 ends up purple and Vue.


Are you intentionally being vague here because it's an HN comment and you can't be arsed going into detail?

or do you literally type

> Look at the git repo that took us 2 weeks, re-do it in another fresh repo... do better this time.

I think you don't, and that your response is intentional misdirection to pointlessly argue against the planning-artifact approach.


> Waterfall never worked for a reason

We're going to need some evidence for this claim. I feel like nearly 70 years of NASA has something to say about this.


While writing the comment, I did think to myself, that NASA did a ton of prototypes to de-risk. They simulated the landing as close as they could possibly make it, on earth. So, probably not pure waterfall either. Maybe my comment was a bit too brusque in that regard.


Hehe. Well it's funny because years and years ago I made a similar comment to a person who I highly respected and had way more experience than me and he looked right at me and said, "No, waterfall does work." And it really made an impression on me.

But yeah, there's a point at which you have to ask where waterfall begins and agile ends; in reality it's blurry. Prototyping is essential in most nontrivial problems, so if you count prototyping as agile then *shrug* - ultimately it doesn't matter what we call it.


It does say: you will never have the time and resources of NASA.


> NASA has done it, for the moon landing.

Which one? The one in 1960s or the one which has just been delayed - again?

I think you can just as well develop a first spec and iterate on it, rather than coding up a solution; what's important is exploration and iteration - in this specific case.


Iterating on paper in my experience never captures the full complexity that is iteratively created by the new constraints of the code as it‘s being written.


"Waterfall" was primarily a strawman that the agile salesmen made up. Sure, it existed in some form but was not widely practiced.


You claim to disagree with OP, but you seem to be describing basically the same core loop of planning and execution.

Doing OODA faster has always been the key thing to creating high quality outcomes.


No, OP literally claims "you can't spec out something you have no clue how to build"; I claim that on the contrary, you absolutely can - you don't need to know "how to build" but you need to clarify what you want to build. You can't ask AI to build something (and actually obtain a good "something") until you can say exactly what the said "something" is.

You iterate, yes - sometimes because the AI gets it wrong; and sometimes because you got it wrong (or didn't say exactly what you wanted, and AI assumed you wanted something else). But the less specific and clear you are in your requirements, the less likely it is you'll actually get what you want. With you not being specific in the requirements, it only really works if you want something that lots of people are building/have built before, because that will allow the AI to make correct assumptions about what to build.


>THe AI won't get the perfect system in one shot, far from it! And especially not from sloppy initial requirements that leave a lot of edge (or not-so-edge) cases unadressed. But if you have a good requirement to start with, you have a chance to correct the AI, keep it on track; you have something to go back to and ask other AI, "is this implementation conforming to the spec or did it miss things?"

This is an antiquated way of thinking. If you ramp up the number of agents you're using the auto-correcting and reviewing behavior kicks in which makes for much less human intervention until the final code review.


Yes, but what about the "spec-review"? Isn't that even more important? Is the system doing what we (and its users) need it to be doing?


> Nothing persists after the session ends.

Does that mean that if I exit claude code and then later resume the session, the database is already lost? When exactly does the session end?


Yes — the database is tied to the MCP server process, so it's created fresh on each claude launch and lost when you exit; resuming a session starts a new process with a new empty database.


> after my old Volvo dies

That's another 20 years mate.


That's not how things work in practice.

I think the concern is not that "people don't know how everything works" - people never needed to know how to "make their own food" by understanding all the cellular mechanisms and all the intricacies of the chemistry & physics involved in cooking. BUT, when you stop understanding the basics - when you no longer know how to fry an egg because you just get it already prepared from the shop/ from delivery - that's a whole different level of ignorance, that's much more dangerous.

Yes, it may be fine & completely non-concerning if agricultural corporations produce your wheat and your meat; but if the corporation starts producing standardized cooked food for everyone, is it really the same - is it a good evolution, or not? That's the debate here.


Most people have no idea how to hunt, make a fire, or grow food. If all grocery stores and restaurants run out of food for a long enough time people will starve. This isn't a problem in practice though, because there are so many grocery stores and restaurants and supply chains source from multiple areas that the redundant and decentralized nature makes it not a problem. Thus it is the same with making your own food. Eventually if you have enough robots or food replicators around knowing how to make food becomes irrelevant, because you always will be able to find one even if yours is broken. (Note: we are not there yet)


>If all grocery stores and restaurants run out of food for a long enough time people will starve. This isn't a problem in practice though...

I fail to see how this isn't a problem? Grid failures happen? So do wars and natural disasters which can cause grids and supply chains to fail.


That is shorthand. The problem exists of course, but it is improbable that it will actually occur in our lifetimes. An asteroid could slam into the earth or a gamma ray burst could sanitize the planet of all life. We could also experience nuclear war. These are problems that exist, yet we all just blissfully go on about our lives, because there is basically nothing that can be done to stop these things if they do happen, and they likely won't. Basically we should only worry about these problems insofar as we as a species are able to actually do something about them.


If they are at small scale then it's fine.

If it's at large scale then millions die of starvation.


> Most people have no idea how to hunt, make a fire, or grow food

That's a bizarre claim, confidently stated.

Of course I can make a fire and cook my own food. You can, too. When it comes to hunting, skinning and butchering animals, that takes a bit more practice, but anyone can manage something even if the result isn't pretty.

If stores ran out of food we would have devastating problems, but because of specialization, and because we live in cities now, you simply couldn't go out hunting even if you wanted to. Plus there are probably much more pressing problems to take care of, such as the lack of water and fuel.

If most people actually couldn't cook their own food, should the need arise, that would be a huge problem. Which makes the comparison with IT apt.


I don't think they're saying _you_ can't do those things, just that most people can't which I have to agree with.

They're not saying people can't learn those things either, but that's the practice you're talking about here. The real question is, can you learn to do it before you starve or freeze to death? Or perhaps poison yourself because you ate something you shouldn't or cooked it badly.


Can you list a situation where knowing this personally would actually matter to you?

Maybe if you end up alone and lost in a huge forest or the Outback, but this is a highly unlikely scenario.

If society falls apart cooking isn’t something you need to be that worried about unless you survive the first few weeks. Getting people to work together with different skills is going to be far more beneficial.


The existential crisis part for me is that no-one (or not enough people) have the skills or knowledge required to do these things. Getting people to work together only works if some people have those skills to begin with.

I also wasn't putting the focus is on cooking, the ability to hunt/gather/grow enough food and keep yourself warm are far more important.

And you are far more optimistic about people than me if you think people working together is the likely scenario here.


>the ability to hunt/gather/grow enough food and keep yourself warm are far more important

These are very important when you're alone. Like deep in the woods with a tiny group maybe.

The kinds of problems you'll actually see are something going bad and there being a lot of people around trying to survive on ever decreasing resources. A single person out of 100 can teach people how to cook, or hunt, or grow crops.

If things are that bad then there is nearly a zero percent chance that any of those, other than maybe clean water, are going to be your biggest issue. People who do form groups and don't care about committing acts of violence are going to take everything you have and leave you for dead, if not just outright kill you. You will have to have a big enough group to defend your holdings 24/7, with the ability to take some losses.

Simply put, there is not enough room on the planet for hunter-gatherers and 8 billion people. That number has to fall to the 1 billion or so range pretty quickly, like we saw around the 1900s.


The well known SHTF story that summarises your point written by a guy who lived in Sarajevo:

https://www.scribd.com/document/110974061/Selco-s-Survival

From a real situation, only alluding to the true horrors of the situation.


> The real question is, can you learn to do it before you starve or freeze to death? Or perhaps poison yourself because you ate something you shouldn't or cooked it badly.

You can eat some real terrible stuff and like 99.999% of the time only get the shits, which isn't really a concern if you have good access to clean drinking water and can stay hydrated.

The overwhelming majority of people probably would figure it out even if they wind up eating a lot of questionable stuff in the first month and productivity in other areas would dedicate more resources to it.


You're not going to be any good for hunting, farming or keeping warm if you have the shits though.


You think that the majority of people actually know how to do those things successfully in the absence of modern logistics or looking up how to do it online?

I have a general idea of how those things work, but successfully hunting an animal isn't something I have ever done or have the tools (and training on those tools) to accomplish.

Which crops can I grow in my climate zone to actually feed my family, and where would I get seeds and supplies to do so? Again I might have some general ideas here but not specifics about how to be successful given short notice.

I might successfully get a squirrel or two, or get a few plants to grow, but the result is still likely starvation for myself and my family if we were to attempt full self-reliance in those areas without preparation.

In the same way that I have a general idea of how CPU registers, cache, and instructions work but couldn't actually produce a working assembly program without reference materials.


I mean, before you starve to death because you don't have food in your granary from last year - you don't even have the land to hunt or plant food, so it's not even relevant.


Ok, poof. Now everyone knows how to hunt, farm, and cook.

What problem does this solve? In the event of breakdown of society there is nowhere near enough game or arable land near, for example, New York City to prevent mass starvation if the supply chain breaks down totally.

This is a common prepper trope, but it doesn't make any sense.

The actual valuable skill is trade connections and community. A group of people you know and trust, and the ability to reach out and form mini supply chains.


I don't think that comment is advocating for most people to be able to do these things or stating that this is a problem.

In fact it says "This isn't a problem in practice though"


> This is a common prepper trope, but it doesn't make any sense.

In case the supply chain breaks, preppers don't want to be the ones that starve. They don't claim they can prevent mass starvation.

(Very off topic from the article)


Preppers are maybe the worst of the nonsense cosplay subcultures in modern memory. The moment things go south the people who come out ahead are always the people able to convince and control their fellow humans. The weirdo in the woods with the bunker gets his food stolen on like day 12. The post apocalypse warlord makes it through just fine. Better, maybe!

The key to survival has always been tribal dynamics. This wouldn't change in the apocalypse.


> Most people have no idea how to hunt, make a fire, or grow food. If all grocery stores and restaurants run out of food for a long enough time people will starve.

I doubt people would starve. It's trivial to figure out the hunting and fire part in enough time that that won't happen. That said, I think a lot of people will die, but it will be as a result of competition for resources.


People would absolutely starve, especially in the cities.

It’s just not possible to feed 8 billion people without the industrial system of agriculture and food distribution. There aren’t enough wild animals to hunt.


If I could hunt, it wouldn't actually matter, because nearly all the animals I would want are in stables. So all I would need to do is find a large enough rock and throw it at them, until they die. The much larger problem would be to keep all the other humans from doing that before me.


In Star Trek they just 3D printed everything via light.


At what point is the threshold between fine and concerning? The line you draw seems to come from your own point of view; I'm sure not everyone would agree, and it's subjective.


> that's a whole different level of ignorance, that's much more dangerous.

Why? Is it more dangerous to not know how to fry an egg in a teflon pan, or on a stone over a wood fire? Is it acceptable to know the former but not the latter? Do I need to understand materials science so I can understand how to make something nonstick so I’m not dependant on teflon vendors?


It's relative, not absolute. It's definitely more dangerous to not know how to make your own food than to know something about it - you _need_ food, so lacking that skill is more dangerous than having it.

That was my point, really - that you probably don't need to know "materials science" to declare yourself competent enough in cooking to make your own food. Even if you only cooked eggs in teflon pans, you will likely be able to improvise if the need arises. But once you become so ignorant that you don't even know what food is unless you see it on a plate in a restaurant, already prepared - then you're in a far poorer position to survive, should your access to restaurants suddenly be restricted. But perhaps more importantly, you lose the ability to evaluate food by anything other than appearance & taste, and have to rely completely on others to understand what food might be good or bad for you(*).

(*) even now, you can't really "do your own research", that's not how the world works. We stand on the shoulders of giants - the reason we have so much is because we trust/take for granted a lot of knowledge that our ancestors built up for us. But it's one thing to not prove everything in detail down to the basic axioms/atoms/etc; nobody does that. And it's a completely different thing to have your "thoughts" and "conclusions" already delivered to you in final form by something (be it Fox News, ChatGPT, the New York Times or anything really) and just take them for granted, without a framework that allows you to do some minimal "understanding" and "critical thinking" of your own.


When it comes to food prep, I'd agree with you that the more time of your life passes, the more irresponsible is the risk of not knowing how to fry an egg, for example.

At the same time, you only need to learn how to fry an egg once, and you won't forget it. You can go your entire life without ever having to fry an egg yourself - but if you ever had to, you could.

When it comes to coding, the analogy breaks down, I think. Aside from the obviously different stakes (survival versus control of your device), coding also requires keeping up with a lot of changing domain knowledge. It'd be as if an egg is one week savoury, another week sweet, and another a poisonous mushroom. It's also less of a single skill like writing a for loop, and more of a combination of skills and experiments, like organizing a banquet.

Coding today suffers from having too many types of eggs, many of which exist because some communities prefer them. I also don't like the solution "let the LLM do it", but it's much easier. Still, if we manage to stabilize patterns for the majority of use cases, frying the proverbial egg will no longer be as much of domain knowledge, choice or elitism as it is today.


You do need to be able to understand that nonstick coating is unhealthy and not magic. You do need to understand that your options for pan-frying without sticking are a film of water or an ice cube, if you don't want to add an oil into the mix. Then it really depends on what you are cooking, how sticky it will be, and what the end product will look like. That's why there are people that can't fry an egg, people that cook, chefs, and Michelin chefs. Because nuance matters; it's just that the domain where each person wants to apply it is different. I don't care about nuance in hockey picks but probably some people do. But some domains should concern everyone.


> You do need to be able to understand nonstick coating is unhealthy and not magic.

Prove it. Please, show me a method by which polytetrafluoroethylene is going to kill me. Because if you're like everyone else moaning about "plastic bad" online, you'll be wrong, and if you have some secret insight that no one else has, I'd love to hear it. But a basic understanding of chemistry reveals that PTFE is functionally inert. It doesn't react with damn near anything, it needs heats well in excess of anything you should be exposed to cooking to melt or burn, and even if you were eating the stuff straight, the whole "inert" thing applies to just about any digestive process your body could apply to it, too.


>You do need to be able to understand nonstick coating is unhealthy and not magic

Will it kill you faster than you can birth and raise the next generation?

If it's something that kills you at 50 or 60, then really it doesn't matter that much as evolution expects you to be a grandparent by then.

