I take issue with point 4, "Rejecting of Best Practices". The article states "Simply put, best practices are non negotiable.". The problem is that "best practices" are a large body of rules of thumb which are almost always context dependent and often contradictory. Applied outside the relevant context, they often lead to worse code overall. So I would reformulate it as:
Only reject a "best practice" if you understand why it was there in the first place.
Maybe I'm just being immature, though!
Otherwise some good points in the article. I'd say a good sign of maturity is that you understand your job isn't to write code, your job is solving problems for the business.
I’ve come across junior colleagues who latch on to some "best practice" that they read/heard about, then preach it to anyone who will listen and aggressively debate it to death, instead of actually getting any work done.
The solution is to ignore these "my thing is objectively better than yours in 100% of cases" debates - choose the pattern/design/paradigm/thing with the least compromises and do something productive :-)
Seniors can make the same mistake - I literally had one senior colleague argue that he'd been coding for decades longer than I have (I had only been coding for ~15 years at the time) and that I should just do as he says, but he would not tell me why, or how that applied to the situation.
Definitely. I even had the previous lead developer at $work love to say that things were best practice without really giving much justification - it's fuelled my quest for 'why?' a great deal :)
No, your skepticism is warranted and I'll back you up here. "Best practice" is a common excuse I hear from juniors to justify cargo cult coding. The example of "testing" being evoked as a "best practice" is strained. Testing is not a "best practice", it's a basic step in software design where you have accountability for results. It can be done wrong.
Even testing is a nuanced argument. The application I’m working on is one of the few cases where skipping tests is justified. I have basic tests around the data import and export classes but everything else is manually tested.
The reason? It’s a small, rapidly-built prototype hooking two poorly documented systems together while the company makes daily adjustments to the manual process I’m automating.
Everything is in flux because we have just started working with a new vendor that will significantly improve our logistics. All of the chaos is deliberately considered and discussed weekly. (I’m in all of those discussions.)
It’s really beautiful to see this all come together. Once we’ve figured out how this will work, it’s time to firm up the internals and nail it down with tests.
Even if that day never comes, the application has extensive logging (~75% of the application code is devoted to highly detailed logs) and production errors will quickly be detected, with enough information to manually fix a bad import or export.
Once edge cases do come up, you can bet I’ll write tests to prevent regressions.
"Best practices" are often just "common practices". In other words, this is how we do things here, adapt. Nobody ever investigated how "best" they really are.
It's actually the more senior devs who go beyond and investigate how things could be done better.
I agree that a certain standardisation of practices needs to exist. I would rather work with someone who calls even imperfect practices "standard practices" and has an initiative on the side to explore better options than with someone who insists on "best practices" out of insecurity and shaky authority.
Zealous adherence to "best practices" is actually a good sign of immature developers.
From my experience, it is usually the expert beginner developer that takes up religious ‘best practices’ discipline. This is roughly around the time they are about to get over the plateau and hit 6-7 years of experience.
The identity begins to attach to things that fit that timeline, as in, ‘I am roughly here and therefore based on how I see myself, I should be aligned to what I perceive my identity to be’. This realignment means they eat up any and all bullshit they see at a developer conference or resort to appealing to authority (‘this is how Google does it’). And why wouldn’t they, they are in their eyes almost there, almost like the best.
Ultimately, this is still sincere, it just leads to dumb results at times (eh, more like often). It is a form of establishing your own authority by associating yourself with authority, a coopting of something undeserved, but emotionally fulfilling. You feel like 6-7 years is enough, but it just ain’t. In its worst form, this is what it is and it’s pretty annoying to deal with.
At the start I followed design patterns religiously. I was proud that I used those patterns. I wrote about them. I told employers. The employers were impressed. The code was an over-engineered mess. But it ticked the best practices box.
It still happens now. But this is mainly because management and other developers are enthusiastic about various techniques that have the label "best practice".
I last left a codebase in an international firm where everything was a dependency and everything was injected and everything was orchestrated. Everyone was enthusiastic that they finally had a codebase with dependency injection everywhere. I hope to never see that codebase again.
I noticed this as well with myself and others. At 5 years you become an expert in whatever stack you choose. Whatever best practice you've adopted becomes religion. It isn't until year 6 or 7 or 8 that you start realizing you know less than you thought. By 20 years you realize you know nothing and have reached developer enlightenment.
Coming to terms with the business side is the first step.
Having your best practises or framework mocked by the next generation of zealots is part of the process. Patterns you know and love become anti-patterns, only to return in the distant future in some limited form with great fanfare.
Letting go and realizing your project / patterns will get hijacked by the business side, other developers and often enough PMs is freeing.
The problem is not so much the individual rules, but the "black and white thinking": believing that everything in programming (or life) can be distilled to very simple rules, without nuance or prior thinking before applying them.
It's a very comfortable position to be in, but it severely hampers your professional development.
Exactly, and it's also the difference between telescopic vs microscopic work. "Code needs to be clear" is a telescopic view. "Your app needs to run at 200fps" is more specific, microscopic. A game might need to, a text processor might not. Circumstances dictate circumnavigation.
Maturity (or seniority) is a form of growth and it has stages. I would call this article Stage 2: Do the approved thing.
It's an important stage. But people who stay in this stage too long become intolerable technical architects who yammer away about best practices and high level company goals, produce technical documents no one reads and plan team building events no one wants to attend.
To me stage 3 involves thinking critically about tradeoffs. Applying best practices in relevant contexts and rejecting them in others is a good example.
Completely agree. Even further, the biggest problem with this list is that the items can fight each other. Your best testing strategy doesn’t save the company from bankruptcy if the deadline to have something done is next Tuesday. Honestly, I kinda saw the first item in some of the author's own guidelines. Not everything works for everyone else, and that includes #4.
It also contradicts the other point about "being a tech zealot." Is memory safety a best practice? Some would certainly say it is. But what if a colleague wants to use C?
I agree, in software you are often designing custom things that need to take a subset of principles from many contradictory high level best practises.
Best practises are negotiable if you can attack them with first principles. This is the basis of critical thinking, which I think is more important than best practises.
Best practises will get you fast social agreement, though, whilst critical thinking will upset many people because you challenge their ideas, trying to find issues early or better options.
> Only reject a "best practice" if you understand why it was there in the first place.
I'd agree with and extend your remarks: if you can clearly document why you're creating your own best practice, then it's OK. Or, rephrased in a negative sense, if you can't clearly write and permanently document an explanation, then you probably don't have a good reason.
Following Best Practices to the letter without knowing why is fine if you're a student or a junior developer, or if your work is not much off the beaten path.
But as soon as you're doing anything more complicated than stringing libraries together you have to know what you're doing and why you're doing it. You have to know the tradeoffs and benefits. This means discussing and yes, negotiating what are considered "best practices", and sometimes rejecting them if they're getting in the way of having a good product.
At my first job I assumed that the seniors at the company didn't know what they were doing since we weren't following certain "best practices". It took me a year or two to realize that there were good reasons why we weren't following those best practices. Now when I want to follow a best practice, I envision a skeptical senior engineer in front of me and think how I would explain to them why I'm doing what I'm doing. If I can't give a good reason, I stick with whatever is simplest to implement.
Even if you are aware of the why, you need something (wisdom, extensive experience, etc.) to know when to enter into confrontations over whether or not a particular best practice should be used.
But I fully agree with the rest of your comment, the more decades I spend programming, the more I realize that the answer to pretty much any question related to SW development should start with "It depends. ..."
> I'd say it's more likely a sign of Dunning-Kruger.
Yep. I was thinking of the same thing when writing other comments. It's very comfortable to "believe" one has the perfect answer to an entire class of problems, but it's never the reality.
The way I would state it is that you need to understand the tradeoffs of given best practices and whether those tradeoffs justify implementing those best practices on your project. It also gets into what best practices should be prioritized over others and on what timeline.
I think there is more nuance to this. If you know what the best practices are in general, you understand them and have a really good reason to reject one or two of them, that’s one thing.
If you reject everything because “you know better”, that’s a totally different story.
A lot of this is common sense stuff, but it might be better to replace this part with "Write tests." TDD is the kind of thing that sounds great in a blog post, but doesn't actually seem to work in the real world. At the very least, TDD is not a benchmark of a "mature software developer."
While not always applicable, I've found that probably 80% of the time I feel “I should write a test”, it's actually a sign that I should leverage the compiler and model the data better, in a way that doesn't allow the invalid state I would want to test. If you don't throw out some of the non-critical tests after your code is working, you're likely incurring a lot of tech debt in maintaining all of your tests that aren't really helping anymore.
TDD seems like something that exists only in theory. Every place I’ve worked, from startups to F500s, has talked about TDD but has never implemented it. I’ve been in the industry over a decade, maybe it’s just me.
What world do you live in that TDD does not work in? I definitely do not do TDD all of the time, but when I do, it works absolutely perfectly. I'd pay any employee a 20% maturity bonus for applying TDD consistently.
People are all getting mad about the "best practices" bit as if it's not just a fact. Does it hurt that much to admit that maybe you're not always the mature developer you'd want to be? I sure am not, I always get carried away writing cowboy code, maybe writing tests after, maybe not because I'm in a hurry and we can stand a bug or 2, and the CI/CD will catch any syntax errors anyway.
The rules are simple though. You determine the best practices, you apply them, and your code will be more correct, it will be written faster, you will spend less time being confused, your final product will be closer to the specifications, and it will have fewer bugs.
How could this be contentious? No way there's anyone out there who truly feels they wouldn't push fewer bugs to production if they always could and would apply TDD.
And so I have noticed teams and orgs which go for an eternity without seeing effective examples of TDD in practice. Is there any resource which might help people like that (myself included)? Something that shows TDD in real-life scenarios, maybe.
There’s a world of difference between writing tests for a rest api endpoint vs other types of pesky code. Too big a difference really. Writing a bunch of tests for your utility methods like a DateFormatter is obvious, but if you shackle up other abstract implementations with tests and strict type checking, you can easily make a piece of unalterable code (maybe that’s the point). The question is, what is the value in doing that? Why shackle up all parts of your codebase when only a few parts are critical?
I’ll admit that I also lack (and have lacked) guidance on testing.
I wouldn't know what resource to point to without having a lot more context. And even if I did, I don't feel qualified to properly give you a diagnostic.
All I can give is my anecdotal evidence. Most of our development teams apply TDD most of the time. We're working on a mature (5y) enough codebase to have a stable foundation. We do lots of integration testing and our testing is not without issues (it's slow, some of it is flaky, lots of test fixtures that are a pain to manage) but those issues do not come from TDD and are being dealt with piecemeal.
The only downside of TDD I've personally witnessed in practice is a tendency to write superfluous tests. They either don't test anything useful (e.g. things already guaranteed by types) or leak implementation details. But we encourage developers to improve those tests, and the engineering team is overwhelmingly positive that TDD has been a net win for productivity.
We try to deploy as much pair programming as we can, and the TDD methodology is especially improved by this. It helps deal with loose ends from the task description or the technical kick-off, and double-check whether the task acceptance criteria make sense. If one developer of the pair has to take off halfway through, the most important bit is already agreed upon and the code review is much easier later on. Or if the pair programming starts halfway through, the arriving developer can read the tests to catch up and help make the other tests pass.
There may be a domain where it makes a lot of sense, but I've rarely encountered it. Writing and maintaining test code takes just as much time as any other code. You often encounter problems during the actual implementation part that challenge your preconceptions. So if you write the tests first, then you'll end up rewriting them many times. TDD also encourages you to write junk tests that provide no safety, which will then need to be maintained and fixed at every refactor. But the biggest problem is that others on the teams simply won't do it, and neither will you when a deadline looms near. TDD aspirations don't last very long in production.
The one place where I think it works well is implementing a well defined protocol. If you have exact definitions for inputs and outputs, then writing the tests first can make things a lot easier. But it's rare when you have such a clean problem to solve.
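A minimal sketch of why test-first fits a well-defined protocol: when inputs and outputs are exactly specified, the tests can be written before any implementation exists. The length-prefixed framing format below is invented for illustration, not from the thread:

```python
import struct

# Invented spec: a frame is a 4-byte big-endian length followed by that many
# payload bytes. With the spec fixed up front, the tests write themselves:
#   encode(b"hi")        == b"\x00\x00\x00\x02hi"
#   decode(encode(data)) == data
#   decoding a truncated frame raises ValueError

def encode(payload: bytes) -> bytes:
    return struct.pack(">I", len(payload)) + payload

def decode(frame: bytes) -> bytes:
    (length,) = struct.unpack(">I", frame[:4])
    payload = frame[4:4 + length]
    if len(payload) != length:
        raise ValueError("truncated frame")
    return payload
```

The round-trip property (`decode(encode(x)) == x`) is exactly the kind of test that can exist before a single line of implementation, which is rarely true for exploratory application code.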
I’m not OP and do not want to state whether TDD works or not, but in my experience it would increase development work significantly on apps which have complicated state machines as their business logic. It would probably end up being less work overall, when we factor in all the bug fixing work, that would happen after such an app would’ve been released without TDD.
TDD makes the assumption that logic is modular and that correctness can be evaluated on small subsets of code in isolation. This is not actually true in some important software domains if you are designing the code correctly. For some software, behavioral correctness can only be evaluated in the context of runtime side effects from other parts of the system. Correctness is a property of the aggregate system, not its components. In these cases, most of the testing is high-level integration testing and writing proper tests is often sufficiently sensitive to implementation detail that you write them when the code is mostly complete so that they match the implementation detail.
As an example, any data infrastructure software that uses non-trivial adaptive scheduling to optimize concurrency and throughput famously has this property. It is also the primary reason things like database engines are effectively untestable until the code is nearly complete. In principle you could architect any software to be amenable to TDD, but for these cases the performance penalty implied by that architecture is so severe (literally on the order of 10x) that no one serious would design their software that way.
In these cases, the same binary can pass testing on one machine and fail on another. It is a different discipline than classic TDD, and more critical as more software systems are implicitly distributed.
<cynic>
Signs of an Immature preacher.
Define a bunch of rules that sound good.
Ignore contradictions between them.
Write blog posts and books.
Live off the proceedings of the cult you built.
</cynic>
TDD is a PITA and I do not know a single soul who actually follows it. But it does sound good.
Best Practices are a dime a dozen and, if anything, they serve to identify which kind of books the team lead once read.
I could go on but I'm seriously bored by this type of post/attitude. I only comment because nowadays they seem to come disguised as "pragmatic".
There are some valid points in there but there are zealot red-flags as well. Funny enough 'no zealots' is one of the points.
I am dealing with this now, having taken over a project from someone who thought this way. Yes it works, but is full of bad error handling, cryptic exceptions, race conditions (particularly TOCTOU bugs), non-atomic database updates, and more.
"But it works" they say, which is true. But a simple SQL query shows duplicate or missing data where there shouldn't be and other violations of basic assumptions.
Looking at code and knowing what could possibly go wrong is an extremely useful skill. Unfortunately it usually takes running head first into those issues to actually learn and comprehend those lessons (which I have done more than my fair share of :))
I have also seen the other extreme. Where people are overly paranoid about the app throwing any exception and try to capture everything that could possibly go wrong until the heat death of the universe. Handling every error somehow, often without understanding what is supposed to happen in this case. So you end up with things being in weird states and secondary effects for what would have just thrown a fairly easy to troubleshoot exception otherwise.
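A toy Python contrast of the two extremes described above (the record-import scenario and the function names are invented for illustration):

```python
import logging

def import_record_swallow_everything(record: dict):
    # The "paranoid" extreme: nothing ever raises, so a malformed record is
    # silently dropped and the weird state only surfaces much later.
    try:
        return int(record["amount"])
    except Exception:
        logging.warning("something went wrong, continuing")
        return None

def import_record_targeted(record: dict):
    # Handle only the failure you actually understand (a malformed amount)
    # and let anything unexpected raise a loud, easy-to-troubleshoot error.
    try:
        return int(record["amount"])
    except ValueError as exc:
        raise ValueError(f"bad amount in record {record!r}") from exc
```

The second version still crashes on a genuinely unexpected record (e.g. a missing key), which is the point: a clear stack trace beats a half-processed import.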
While I can agree with some of the points made while gritting my teeth, the way this is written somehow puts me off.
It feels as though the author has taken an authoritative, slightly condescending stance towards his readers. The message it sends is that the author is out to invalidate his readers and judge them, not take their hand and teach them. I assume the intention was otherwise, but it is not presented as such.
It reeks of a junior with strong opinions rather than an old software jedi that intends to teach us the mastery of the art.
No matter who gives this observation, the presentation will always have a patronizing tone.
It’s really the discussion here on HN that’s curing it, which is fine. Someone had to get us the meat before we could cook it. There’s a lot of consensus here and good counterpoints that aren’t all damning (half of it can be chalked up to inexperience, or effort-vs-value debates, e.g. TDD, impressionable developers, etc.).
I think this starts with the title, "Signs of an Immature Software Developer". This immediately puts the reader on the defensive, because nobody wants to be called or considered immature. If a junior developer were to read this and see some of their own mistakes, I think they would be unlikely to be in the right frame of mind to take the feedback onboard and try to address it. The article doesn't appear to be following its own advice, to hone your communication skills. I would suggest having a(nother) read of How to Win Friends and Influence People. A better title might be "5 Growth Areas for Junior Developers", with a corresponding flipping in phrasing of the subheadings (e.g. "Being a Tech Zealot" -> "Being Pragmatic", "Lack of Team Focus" -> "Team Focus").
It does have a whiff of ego in the style. It's already odd to have a 'company' domain show 'My Blog' on the home page.
Even to call the signs 'immature' isn't helpful, as it implies that a dev is not as mature as they 'should' be. An illustrative quote is "Immaturity is something that comes with age." Junior would be a more fitting description, as that's only referring to their current progress rather than a judgement.
In my experience (over 20 years), rules are meant to be broken. You can’t have a set of meaningful rules that isn’t a misfit in some cases.
“Write unit-tests!”
- on a shell script?
- for my playbook?
- on that static html-css?
- on my docker config?...
You’ll learn that best practices are great guidelines when you begin. Once you’re seasoned you’ll know what’s right... that’s what makes you senior. You don’t ignore them, you use them wisely.
> When changing the world or providing a service, languages/tools take the back seat. The founders of any startup don’t care what language you use to complete the goal.
As a founder, I have found that the tech stack choice does matter beyond just getting the problems solved.
When hiring, engineers have judged joining my company based on our tech stack. Not just about familiarity, some have outright said that it’s because they only want to learn the most popular and transferable skills.
When discussing strategic acquisitions, it also matters. Interested buyers (ie for an acquisition) have asked about our tools and languages. We are more valuable if our languages and tools are compatible / common enough to fit into their organization.
To me the biggest one is not understanding and solving the actual problem that needs to be solved. Instead, people will solve problems that the business doesn't have, or technical issues that don't exist. Or just some mildly related, more fun problem instead.
Lots of critical nuance is missing, making it sound like it’s written by a “happy-path” eng manager.
If your “mature engineers” are business- and team-focused (read: doing what they are told), disregard innovations in tech and evolution (and evangelism) of best practices, you’re gonna have a boring team that will eventually get disrupted either from outside or inside of your org.
Business-oriented means "we do realize that the money doesn't magically materialize out of thin air, and that anything that keeps customers happy or helps attract more is significantly more important than any code purism nonsense."
> business- and team-focused (read: doing what they are told)
"Doing what they are told" strikes me as the antithesis of being business and team focused - particularly the former.
A non-business focused engineer is mindlessly implementing the solutions the business asks for, without taking the time to understand the problem, motivation or mental-model of the business.
A non-team focused engineer is narrowly implementing the tasks that end up assigned to them, with no interest in what the rest of the team is doing, and not engaging with the wider team to challenge and contribute to what they are doing.
Publicly reprimanding past, present and future colleagues in order to boost your own standing by setting goalposts may be a bit of a red flag, for my money at least. Writing about specific issues, with examples, to teach a general rule, ideally in a humble way rather than by setting goalposts, would be more productive.
6. Posting clickbait titles to your programming blog.
Edit: here's a real critique.
> Best practices are non-negotiable
Have you ever argued about which practice was best? I have, many times. They are endlessly negotiable, and there is no canonical list of best practices.
That jumped out at me too. In software specifically, I’ve seen a lot of Best Practices come and go over the years, and they’ve often contradicted the previous ones. Plus there are situations where I can, in good conscience, say “best practices say we should do X here, but because of reasons A, B, and C, I think we should do it this way instead.”
The real trick is a) getting your team to agree on what practices to follow, and b) agreeing that if you’re going to deviate from them that there’s an expectation that you’ll need to justify that.
“I did it this way because it’s the first thing I thought of” vs “I reasoned my way through this and this is why we should do it this way” are very different situations, even if they result in the same code.
This is only good advice if you're solving very simple, very trivial problems, or if you don't have enough experience. (That, or if you redefine best practices to tautologies like "always test" or "always write good code").
For anything more complicated than the most trivial of software, it's important to deeply understand your best practices and if they make sense to each situation or not.
but why should they?
For example, you may have a rule that you need CI to pass to merge a branch to your trunk.
But maybe CI is slow and flaky, and a business test fails on a PR where you've only changed the README.
Sure, you can say "well, the rule is to only merge with green CI" and restart the failed job, but you can also think "well, I can merge this".
Maybe it's ok, maybe not, but I think you should trust your fellow developers with doing the right thing rather than berate them "you didn't follow the rules!"
On the contrary, I think that a sign of a mature software developer is understanding why particular "best practices" exist, and when it's appropriate to reject them.
WRT the code zealot listing, there is a problem with non-idiomatic code.
For example, most modern languages have list comprehensions and some kind of map-reduce on collections built in for free, nicely debugged, etc. Once in a while you'll run into noob code where someone writes their own long, elaborate and buggy routine full of if/then and goto to slowly iterate a pointer through an array, sometimes skipping the first or last element of the array, etc. But if all an immature dev knows is "if, then and goto" and not that fancy list comprehension stuff, you can get some crazy-looking code that can sometimes be optimized down to one idiomatic line of code.
There are of course other examples of non-idiomatic code. Now, is DRY (don't repeat yourself) idiomatic, a best practice, both, or...?
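A toy Python illustration of that contrast — the off-by-one in the verbose version is deliberate, mimicking the "skips the last element" bug described above:

```python
# The "only knows if/then and goto" style, transliterated to Python: manual
# index bookkeeping with an easy-to-miss off-by-one in the loop bound.
def doubled_verbose(xs):
    out = []
    i = 0
    while i < len(xs) - 1:   # bug: stops before the final element
        out.append(xs[i] * 2)
        i += 1
    return out

# The idiomatic one-liner: there are no bounds to get wrong.
def doubled(xs):
    return [x * 2 for x in xs]
```

`doubled_verbose([1, 2, 3])` silently drops the last element, while the comprehension cannot express that mistake at all.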
I have seen Java code where a "senior developer" (a guy hired a few months before me, therefore considered an expert by the management) reinvented the associative array. Two lists (one for the keys, one for the values), with methods to add an item and get an item. Not even encapsulated as a new type, but a separate pair of lists and pair of methods for each associative array... there were dozens of instances in the code.
On the positive side, fixing bugs in his code was often easy. First I refactored hundreds of lines of code into ten lines, then I added the missing "if".
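A rough Python reconstruction of the anti-pattern described (not the actual Java code), next to the built-in alternative:

```python
# Roughly the pattern described: parallel lists plus ad-hoc accessors,
# repeated from scratch for every "map" the code needs.
names, emails = [], []

def add_email(name, email):
    names.append(name)
    emails.append(email)

def get_email(name):
    for i in range(len(names)):
        if names[i] == name:
            return emails[i]
    return None  # linear scan, no duplicate-key handling, lists can desync

# The refactor: the language's built-in associative array does all of this.
emails_by_name = {}
emails_by_name["ada"] = "ada@example.com"
```

Besides being one line instead of ten, the dict cannot get its keys and values out of sync, which is exactly the class of bug the parallel-lists version invites.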
The code can have the least amount of technical debt, be written in the simplest way and be abstracted enough to allow the business to evolve, but if the business runs out of money nobody cares about the rest because it won't matter.
Staffing and code maintenance are a business problem but I see software developers early in their journey put too much weight on these compared to bottom line value to the company.
There are always extremes, of course. But I find that a large share of technical debt could have been solved from the beginning with a little bit of foresight and planning.
Of course stakeholders often have a way of upending your beautiful design, but one of the core tenets of agile is to design your code to be adaptable. Done upfront, the cost is usually minimal but can completely save you when your client asks for the dreaded "small change".
So maybe the right balance then should be acknowledging the correct value of code. Don't focus on code, but also don't neglect it. Don't put it over bottom line value, but also don't ignore it.
I have seen inexperienced developers and managers -and also developers with a lot of wrong experience- sink enormous amounts of time/money into maintaining extremely bad code without even noticing where the cost came from.
> if the business runs out of money nobody cares about the rest because it won't matter.
"Business problems" aren't constrained here to "startups". Most business concerns are things that already exist in successful companies, because they are in business.
Only when software is the product. When it's not, it doesn't really matter as long as those groceries keep being registered on the sales inventory platform, the trucks get loaded with the right amount of goods, or whatever the actual purpose of the business is.
In such cases the IT department is most likely a couple of managers for external contractors delivering projects organized per budget packages.
The linked website is awful, has like two blog posts ever, no author byline, nothing. But the title baits another eng. dick measuring contest, let's go HNers!
I think your list is probably the right list for getting a high paying job but pretty wrong for being an effective developer. I think that's what you were saying in a cynical way anyway.