IMHO TDD, like a lot of the agile stuff, is a good idea with solid foundations that people get wrong all the time and end up making things worse with.
Agile was supposed to ease up on process and make teams adapt to changing requirements. It wasn't supposed to use up >30% of your working time just to service the methodology, but that's what it ends up doing when you get in the Agile Evangelists.
TDD was supposed to ensure more correct software at the cost of some overhead (perhaps 30%?) by making sure every unit had its tests written ahead of the code. In practice I've seen it kill productivity entirely as people write test harnesses, dummy systems and frameworks galore, and never produce anything.
A combination of these two approaches recently cost an entire team (30+ people) their jobs as they produced almost nothing for almost a year, despite being busy and ostensibly working hard all year. We kept one guy to deal with some of the stuff they left behind and do some new development. When asked for an estimate to do a trivial change he gave a massive timescale and then explained that 'in the gateway team we like to write extensive tests before the code'.
The only response we had for him was 'and do you see the rest of the gateway team here now?'
> IMHO TDD, like a lot of the agile stuff, is a good idea with solid foundations that people get wrong all the time and end up making things worse with.
I think the reason it can go sideways is because the advocates and casual adherents don't accept that for some teams a methodology really may not provide the claimed benefits, or it costs too much elsewhere. It's easier to say "It's not [methodology] that isn't working, you're doing it wrong."
That's a really attractive answer, made all the more tempting because it's sometimes true. But not every software team is the same, operating under the same constraints. People tend to generalize from their own experiences. Lessons learned about what works for X don't necessarily apply to Y. What has been lacking in the TDD/Agile/insert-fad-here discussion is a higher-level "this is why the ideas worked for us, here are the component pieces and their purpose, this is how to determine which pieces to adopt and how to tailor them to your organization."
You could say that someone out there is making that case, but their voice is drowned out by the snake-oil salesmen. I don't hear it up front; the rare times I do hear it, it's deep into the "you're doing it wrong" conversation, when any sense of perspective in the discussion has already been beaten to death.
I completely agree. I think these issues stem from our industry's tendencies to abstract the problem away from the solution before teaching the solution. This is the wrong way to teach.
If you show other developers how you solved your particular problem (how your solution came to be) and explain why you used that approach, many will quickly be able to discern how and when to apply your solution to their particular problem area. If instead you just give them a solution, they're missing a piece of the puzzle and may arrive at the wrong conclusion.
Other developers are also pretty intelligent folk - don't do your deduction for them. Let them do it themselves.
I think our industry suffers from the lack of studying and describing the history of concrete software systems. It could be done so easily now - we have the entire source code history available, we have the history of issues. Books could be written about the evolution of popular software systems - especially open-source classics (e.g. Emacs, the Linux kernel, or Firefox). Those books would describe the problems encountered during the development and their solutions, the design / architectural decisions, as well as the evolution of the development process.
Studying software history should also help us not reinvent the wheel badly. If we studied history perhaps we would've known more about NLS or Smalltalk or why other ideas were invented. How many of us know why the concept of objects was invented in the first place? How many know why classes were invented - what particular problem prompted someone to invent them, and what the circumstances were at that point in time?
The reason why this history is so important is that when inventions go through the reuse process, they're never quite that perfect at solving those newer problems compared to the original problem. Therefore the further away we are from the original problem, the less likely we will be to understand the general aspects of the solution or to re-use the original thinking process to adapt the solution to our needs.
Finally it will also allow us to gain at least some of the knowledge that can presently be gained only from experience and mentoring - which is always a good thing.
The emphasis I take away from your story is that something may be a good idea, even foundational, but there's no guarantee your team will interpret or execute it correctly.
If you do the math on how much time should be taken up by Agile meetings, it's about 10% of the team's time. Add another 10% for vacation, illness, town halls, dentist appointments, etc. That leaves 80% for understanding requirements, writing code and testing code. Yet the market for "agile consultants" is a market for lemons - you don't know if this guy/gal is a huckster or truly working for your success.
Similarly for TDD, I've never heard of TDD being about a team writing tests en masse before the code - that wasn't something Kent Beck or Bob Martin ever recommended anyway.
Ultimately this is why I believe the most important roles on a software team are the "management roles" - Product Owner first and foremost. Any solid Product Owner I've worked with would have mandated a demo after every iteration and ejected the software team management very quickly if there were no results. Better to punt the problem child early instead of taking the whole team down later!
> IMHO TDD, like a lot of the agile stuff, is a good idea with solid foundations that people get wrong all the time and end up making things worse with.
So one could reasonably suspect that people who "get agile" are just talented and would be good developers anyway. Occam's razor invites us to assume that agile has no effect. Are there any scientific studies on the effectiveness of agile (or TDD), or is this just a homeopathy situation?
If you're going to ask that, you should tell us which studies you used to select the other techniques you use.
(It turns out that there are, tons actually, but software engineering is a difficult topic to study and I wouldn't draw any conclusions from them. The vast majority of controlled software engineering studies I've seen are conducted on students using problems of trivial scope. The uncontrolled ones aren't much better. Basic questions like "what do you mean by productivity" have yet to be answered well.)
> Are there any scientific studies on the effectiveness of agile (or TDD), or is this just a homeopathy situation?
Yes (and yes) on TDD. None of the ones discussed in Making Software: What Really Works, and Why We Believe It were done particularly well, and the less bad ones tended to show no effect or contradictory mixed results (i.e., they disagreed on what went better vs. what went worse).
"Agile" is far too ill-defined to actually study usefully. Specific variants or individual practices could be studied.
Nonsense. I'm not an advocate of heavyweight process by any means, but a number of principles of the agile movement can benefit developers of any skill level or experience. And more importantly, it can help a team collectively more than it helps the individual. The trick is introducing these principles carefully, which almost every agile leader I've met fails to do.
I've never seen anything wrong with more communication between stakeholders and adapting a solution to meet their ever-changing needs.
> I've never seen anything wrong with more communication between stakeholders and adapting a solution to meet their ever-changing needs.
Surely there's a point where more communication starts being detrimental. Reducing it to the absurd: if you spend 100% of the time communicating then you have no time left to actually build the thing. So there's a trade-off, as usual.
It could also be argued that ease of communication to deal with "ever-changing needs" encourages more superficial requirements and less deep thinking about the actual problem, leading to wasted effort and lower quality results.
Maybe you are right, maybe the above paragraph is right. I don't know and neither do you. Replying "nonsense" is not really an argument.
> I've never seen anything wrong with more communication between stakeholders and adapting a solution to meet their ever-changing needs.
There is an effect where people who are invited to second-guess themselves become less happy with their initial decisions. There is another effect where cost, price, effort, or social standing becomes associated with quality.
If you give the impression of falling over yourself to serve the whims of your stakeholders, you're doomed. If you set reasonable limits... well that requires experience to do properly, and that same experience could be used to twist a more traditional methodology into something useful.
Perhaps agile has a stronger sink-or-swim learning curve? You either learn quickly and succeed very well, or you never figure out what went wrong.
> 'in the gateway team we like to write extensive tests before the code'
Which, ironically, is the opposite of TDD. One advantage of classic TDD unit testing is that your tests grow with the code. One of the dangers of not doing TDD is the situation above, where your integration tests require massive scaffolding and custom frameworks upfront, essentially turning the process into waterfall.
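The "tests grow with the code" loop can be sketched in miniature. This is only an illustration; the `slugify` function and its tests are invented here, not taken from the projects discussed above:

```python
import re

# Step 1 (red): write one small failing test before any implementation
# exists. No harness, no simulator, no custom framework.
def test_spaces_become_hyphens():
    assert slugify("hello world") == "hello-world"

def test_case_is_normalised():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write just enough code to make the tests pass.
def slugify(text: str) -> str:
    return re.sub(r"\s+", "-", text.strip()).lower()

# Step 3 (refactor): clean up with the tests as a safety net, then
# repeat. The suite grows one small behaviour at a time alongside
# the code, rather than being built en masse up front.

test_spaces_become_hyphens()
test_case_is_normalised()
```

The contrast with the story above is the granularity: each cycle produces working, tested code, so there is never a long stretch of pure scaffolding with nothing shippable.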
I've never seen a project fail because of too much testing. Maybe because of an over-emphasis on process, but not because of writing too many useful tests. On the contrary, projects I've worked on fail or approach failure because of lack of clear requirements, whether in unit test form, BDD, or well-written user stories. If it's not clear what a product owner wants then it's impossible to test and impossible to implement to match the owner's expectations. TDD is useless if you don't know what you're trying to build.
Too much testing, no. Too much time spent building test frameworks, simulators and 101 other things before a line of code is written? Well I just witnessed it last year.
Not that it was the only factor. Heavy 'agile' process was certainly part of it. They also threw everything away and restarted again at some point, likely due to changing requirements. But it was part of the picture that added up to nothing getting done.
Jesus, what a nightmare! You look back on your work and you've only produced tooling instead of solving the problem you set out to solve. An easy trap to fall into, but doing it for a year is something else.
It wasn't quite as bleak a picture as maybe I've painted... but not far off either. Must have been pretty depressing for the team as well as for the folks running the show.
On the positive, I think they all got good roles with our competitors!
The problem is that it's entirely reasonable and common for a product owner to not know what they want. It's not some inconvenience that can just be ignored.
It's a fairly uninteresting problem if you can fully specify it before development starts.
Specification should be more like a conversation. That's the whole point of a lot of incremental and customer-focused development.
The problem is when you're under time constraints or technological constraints that don't allow for rapid iteration.
The discipline of writing tests tends to tease out design requirements fairly quickly. Because as soon as someone takes the time to think carefully about the implications of what the product owner wants, it usually results in a feedback loop eliciting further details. Like a conversation, yes, but one which results in documented, tested implementations.
>Because as soon as someone takes the time to think carefully about the implications of what the product owner wants, it usually results in a feedback loop eliciting further details.
I am not a TDD person, and that sounds pretty much what I do. Why do I need a load of tests to find out more details of what I am trying to implement?
The benefits of testability as a focus of requirements gathering, and of automated tests in the source tree as a concrete artifact of that gathering, are:
(1) If something is specified well enough to be automatically tested, there is clear agreement (and not superficial agreement hiding different interpretations).
(2) If automated tests are created and in the source tree, the unambiguous knowledge in #1 is preserved, which makes future questions about the expectations and intent of the existing code base easily resolvable.
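A small sketch of point (1), using an invented requirement: a phrase like "orders over $100 get a 10% discount" sounds agreed-upon until someone has to encode the boundary. The function name and the product owner's answer here are hypothetical:

```python
# Hypothetical requirement: "orders over $100 get a 10% discount".
# Writing the test forces the boundary question into the open:
# is exactly $100 "over"? Suppose the product owner answers
# "strictly greater". Amounts are in whole cents to avoid
# floating-point rounding surprises.

def discounted_total_cents(total_cents: int) -> int:
    """Apply a 10% discount to orders strictly over $100."""
    if total_cents > 100_00:
        return total_cents - total_cents // 10
    return total_cents

# These assertions are the recorded agreement, not just a check:
assert discounted_total_cents(100_00) == 100_00  # exactly $100: no discount
assert discounted_total_cents(101_00) == 90_90   # over $100: 10% off
```

Once this is in the source tree, anyone who later wonders whether the boundary was deliberate can see that it was specified, not accidental.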
For most people the easiest way to see what using an API looks like is to use it. And the emergent design that results can often be better than what you would have designed if you tried to design by thinking about it without writing code. http://c2.com/cgi/wiki?WhatIsAnAdvancer
And writing an API takes up a small proportion of my time. Usually it's generating database queries, spitting them out as web pages, and presenting the data in a way that non-technical users can understand.
If you're spending a substantial proportion of your time generating database queries or rendering results to web pages you should look at using better libraries for doing so, and/or improving your own library-type code.
If you're talking about figuring out how best to present the data, that's not really "development" per se. But it is another kind of API, and again something that you can design more effectively using a test-oriented approach (e.g. start by figuring out the use cases for what a user wants to find out from the information).
How do you know I am not using good libraries for writing my code? (I usually use Django, which seems to be well respected.) Again, that has nothing to do with test-driven development.
I already start by figuring out what the user wants. Again, nothing to do with writing a load of tests first.
All your suggestions for how TDD will improve my development seem to point to exactly what I do already without writing tests first.
> How do you know I am not using good libraries for writing my code?
You said "writing an API takes up a small proportion of my time. Usually it's generating database queries, and spitting them out as web pages...". Which suggests to me that you're not using libraries effectively, because those things are a tiny proportion of my time. The business logic - i.e. the part that's actually specific to your problem - should be where you spend most of your time, and that's the part where TDD is effective.
> I already start by figuring out what the user wants. Again, nothing to do with writing a load of tests first.
Writing a test for a use case ensures you actually understand it. It helps you find more possible problems or misunderstandings in the same way that a blueprint is an advantage over a sketch. And it's probably the most effective way to communicate these use cases to developers you're collaborating with.
Wow, so without knowing very much about my work, you are able to tell me where I should be spending the majority of my time.
And how does writing a test ensure that I understand a problem any more than not understanding it? If I don't understand it properly I will likely write the wrong test.
> TDD was supposed to ensure more correct software at the cost of some overhead (perhaps 30%?) by making sure every unit had its tests written ahead of the code.
Not exactly. Remember that TDD came from Extreme Programming, and the radical idea of Extreme Programming was "embrace change": the idea that you could accept—no, desire—requirements changes after you started programming.
At the time, all software design was supposed to be done in advance; to do it any other way would lead to madness. The (fictional, it turns out [1]) "cost of change curve" said that a change in requirements would cost 20-150x as much if made after coding began, and thus all requirements had to be nailed down in advance.
XP said, "what if we could flatten the cost of change curve, so that the cost of a change is just the cost of implementation, regardless of when the change is suggested?" That's the whole raison d'être of XP.
The cost of change curve was flattened by using evolutionary design. The way you got evolutionary design was with four practices: pair programming (to improve quality), simple design (to avoid painting yourself into a corner), refactoring (so you could change the design), and... TDD. So you could refactor safely.
TDD is about enabling change. The quality benefits are also valuable, but not the main point. That's why TDD'ists care so much about fast tests—you need quick feedback when you're doing design refactorings.
[1] Laurent Bossavit investigated the literature for the source of the cost of change curve claim and determined that it was based on people graphing their opinions, not empirical data. Over time, those opinion graphs were assumed to be based on real data, but they weren't. https://leanpub.com/leprechauns
> Agile was supposed to ease up on process and make teams adapt to changing requirements. It wasn't supposed to use up >30% of your working time just to service the methodology, but that's what it ends up doing when you get in the Agile Evangelists.
Well, I've worked in a (non-agile) environment where the methodology ate up way more than 30% of our working time. If an Agile Evangelist could have gotten us to 30%, most of us would have been ecstatic. (It wouldn't happen, though - we were FDA regulated as a medical device manufacturer, which imposed huge overhead requirements.)
30% overhead is often a function of team size, not methodology. A team of 30 is not likely to be agile in a meaningful sense, there are too many coordination vectors and communication channels.
And if those in a position to ask about the rest of the team aren't sold on agile to begin with, the odds of it working shrink in proportion to how many people are just going through the motions while fearing for their jobs and polishing their resumes for a year.
There were various sub-teams that had their own working areas and their own sprints, it wasn't one huge 'agile' team of 30.
>> And if those in a position to ask about the rest of the team aren't sold on agile to begin with, the odds of it working are inversely proportional to the odds of ...
So we're agreed, it's not a silver bullet. It might work, it might not, and people pretty much have to be believers to get any benefit out of it.
I once asked an experienced developer what he thought about Agile and TDD. He responded by saying that they are useful tools, when used by people who know what they're doing.
There's no replacement for working with quality people, and no tool prevents you from being a moron.
I've seen it said here before - you can't expect a mediocre team to become world class by forcing them into a set of methodologies, you'll just get people who are mediocre at doing it that way too.
Fallacy of the grey. Yes, a good team will be better than a bad team for any reasonable methodology. That doesn't mean there aren't methodologies that are better than others.