
I'm sorry to contribute to the dogpile effect (the long thread probably already says what I'm about to say, but I didn't see it), but..

Devs estimate known risks: the ideal path plus predictable delays. The further reaches of the long tail are the unknown risks.

Known risks are estimated based on knowledge (hence, a question for a dev); unknown risks are just an adjustment parameter on top of that estimate, possibly based on historical evidence (there's no reason a dev could estimate them any better).
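To make that split concrete, a minimal sketch in Python. The numbers are made up, and the 1.4 overrun factor is purely illustrative; in practice you'd derive it from your own history of actual-vs-estimated ratios:

    # Hypothetical split: the dev estimates known risks, management applies
    # a historical adjustment for the unknown-risk long tail.
    dev_estimate_days = 10       # ideal path + predictable delays (known risks)
    historical_overrun = 1.4     # made-up factor from past actual/estimated ratios
    planning_estimate = dev_estimate_days * historical_overrun
    print(planning_estimate)     # 14.0 days goes on the plan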

It should be management's job to adjust a dev estimate. Let's be real here - I've never heard of a real-life example of management using stats for this kind of thing, or of it being enthusiastic about devs doing the same.

Perhaps if management is taken seriously as a science, things will change, but I doubt it.

<strong_opinion type="enterprise_software_methodology_cynicism">

Bizness is all about kool-aid-methodology-guru management right now, very much the bad old ways. A convincing example of workable analytical management would be needed for things to change, but that's unlikely: all the stats people are getting high-paying, cool ML jobs, and aren't likely to want to rub shoulders with kool-aid-drinking middle managers for middle-management pay..

</strong_opinion>



It's actually not super uncommon for management to use statistical tools for this, although they may not realize what exactly is going on. For example, there are a couple of tools whose names escape me at the moment that extend or wrap MS Project but have a statistical time-estimate piece based on quantile time estimates (e.g. your engineers give you 50% and 90% confidence estimates, and the tool applies simple modeling).
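Roughly the kind of modeling I mean, as my own sketch of the mechanics in Python, not a description of any particular product: treat each task's 50% and 90% estimates as quantiles of a lognormal duration, then Monte Carlo the sum to get a project-level distribution.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    Z90 = norm.ppf(0.90)  # ~1.2816

    def lognormal_from_quantiles(p50, p90):
        """Back out lognormal parameters from 50% and 90% estimates (days)."""
        mu = np.log(p50)                     # median of a lognormal is exp(mu)
        sigma = (np.log(p90) - mu) / Z90
        return mu, sigma

    # Illustrative task estimates: (p50_days, p90_days)
    tasks = [(3, 6), (5, 12), (2, 3), (8, 20)]

    n = 100_000
    total = np.zeros(n)
    for p50, p90 in tasks:
        mu, sigma = lognormal_from_quantiles(p50, p90)
        total += rng.lognormal(mu, sigma, size=n)

    print("project P50:", round(float(np.percentile(total, 50)), 1))
    print("project P90:", round(float(np.percentile(total, 90)), 1))

The point being that the project-level P90 is not just the sum of the task-level P90s.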

I've also been at one shop where we used a more sophisticated modeling approach built in-house. Reception was warm among at least most of management.

So my experience isn't that these tools don't exist, or that management is unwilling to use them, but rather that management has trouble accepting the implications of using them properly. Specifically, if any of the model's inputs change, you should update them and re-run the model. With a linear (i.e. bullshit) Gantt chart, you just say "oh, we missed this week of work, move the endpoint forward a week". With more careful modeling, adding an unexpected dependency or a couple of days of work in the wrong place can suddenly add 3 months to your worst-case estimate. Execs really don't like that happening without a whole song and dance about why, so there is a tendency to "freeze" the models in ways that make them progressively less useful. Worst case, they become asymptotically meaningless.
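A toy illustration of why re-running hurts, with invented numbers; the point is the asymmetry between the median and the tail, not the specific values:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Baseline plan: three lognormal-ish task durations (days), modest spread.
    baseline = sum(rng.lognormal(np.log(m), 0.5, n) for m in (5, 8, 13))

    # Revised plan: add one "small" unexpected dependency, median ~3 days,
    # but very uncertain because nobody has scoped it yet.
    revised = baseline + rng.lognormal(np.log(3), 1.2, n)

    for label, dist in (("baseline", baseline), ("revised", revised)):
        print(label,
              "P50:", round(float(np.percentile(dist, 50)), 1),
              "P95:", round(float(np.percentile(dist, 95)), 1))

The median barely moves, the P95 jumps by weeks, and that jump is exactly what the execs see.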



