I've worked at three different places where I attempted to implement automated testing strategies; I failed twice and succeeded once. If you subscribe to the anecdata model of knowing things, here goes:
At the first company, there was a strong culture of testing but no strong culture of teaching. I did not last long there, and I did not succeed at implementing even basic automated testing. Everyone was very busy in their own roles, and nobody would show a co-op employee how to test. I was a Computer Science student who hadn't graduated yet and honestly didn't know about unit testing frameworks, or Selenium, or whatever. If you give me a giant Waterfall requirements document and a giant spreadsheet to fill up with noughts and crosses, with little to no additional direction about the software, how it's tested, or how it even works, then you're going to have a bad time.
At the second company, there was a strong culture of quality, but not of testing. We were also a two-person developer shop, so there was very little time for teaching or testing. I was expected to learn on my own, and to avoid spending time learning things my boss already knew on our behalf. I accepted broken code from him all day long and made it work.
To be honest, that's where I learned to do good work and not break stuff, but we never invested heavily in test suites. We also almost never built anything above average in complexity, and when we did, it wasn't very long before the boss left and I was on my own to support it. A few years later, that guy wrote a book about how to dig out of exactly this situation: your software is successful and needs to change, but doesn't have any tests.
(He says it wasn't a very good book, but from my perspective it's something that was meant to be read preventatively. Even though it reads more like a step-by-step manual, you should hope that you never have to follow these terrible, terrible steps. If you are starting a new project and still have a chance to keep test coverage at acceptable levels as you go, I'd recommend reading it, so you know what you're in for if you make the bad decision and your software becomes successful anyway. I have a coupon code if you really want to know, but I digress... the short version is: you've got to test everything before you change anything.)
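To make that short version concrete, here's a minimal sketch of what "test before you change" tends to look like in practice: a characterization test that pins down whatever the legacy code does today, warts and all. This is Python, and the function and its output are hypothetical stand-ins of mine, not anything from the actual book.

    # A hypothetical legacy function, standing in for code with no tests.
    def format_invoice(amount, currency):
        # Imagine nobody remembers why it formats things this way.
        return "%s %0.2f" % (currency, amount)

    # A characterization test: assert whatever the code does *today*,
    # quirks included, so any later change in output is deliberate and
    # visible rather than accidental.
    def test_format_invoice_pins_current_behavior():
        assert format_invoice(1234.5, "USD") == "USD 1234.50"

Once enough of those exist around the code you need to touch, you can finally change it, and let the failures tell you what you actually changed.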
In this last role, I have succeeded at implementing automated testing. (But at what cost?) The company supports the idea of spending time on testing. My direct supervisors were all willing to wait the extra week or two to see what I came up with, and because they understood the benefits of testing, in retrospect it was always considered time well spent. Nobody was really in a position to teach, but fortunately I had tons of experience with trying and failing before, so this time, with the right support structure, I was able to get it mostly right on my own, with help from docs and the internet. (It helps a lot that browser testing tools and other testing tools have all evolved a lot in the last 10 years too; they are objectively better now than they were when I had that first job and no support.)
In summary, I'd say it's necessary to have all three: time to learn, actual support from above for the delays when the inevitable "this seems like something that shouldn't take this long" moment arrives, and an actual operational need for automated testing, which is not a given, depending on your team size, design, and need for growing complexity.
It is possible to build a widget that works and never changes again. In that case, spending time on a test suite may be a waste. But I have found, as I've grown more successful and work with more people, that this happens a lot less often than it used to.
Thanks for the insight. In my last role I attempted to introduce automated testing, and I had support and some success, but no buy-in from the rest of the team meant I ended up 'owning' the test suite.
The best suggestion I've heard recently is: when someone writes a flaky test, that person needs to be the one who fixes it. (If you write flaky tests and I fix them, I learn how not to write flaky tests, and you keep on writing them, blissfully unaware of the pain they cause every day.)
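To make "flaky" concrete, here's a minimal sketch in Python of the most common pattern I mean, plus the usual fix. The FakeQueue class is a hypothetical stand-in of mine for anything doing background work; it's not from any particular framework.

    import random
    import time

    class FakeQueue:
        # Hypothetical stand-in: background work takes a variable amount
        # of time, like a real job on a loaded CI machine.
        def __init__(self):
            self._done_at = time.monotonic() + random.uniform(0.05, 0.3)

        def is_done(self):
            return time.monotonic() >= self._done_at

    # Flaky: bets that the background work always finishes within 100ms.
    # Sometimes it won't, and the test fails at random.
    def test_job_completes_flaky():
        queue = FakeQueue()
        time.sleep(0.1)
        assert queue.is_done()

    # Stable: poll with a generous timeout instead of a fixed delay.
    # This only fails if the work genuinely never finishes.
    def test_job_completes():
        queue = FakeQueue()
        deadline = time.monotonic() + 5.0
        while time.monotonic() < deadline:
            if queue.is_done():
                return
            time.sleep(0.05)
        raise AssertionError("job did not complete within 5s")

Run the first test in a loop and it will fail some of the time; the second costs nothing extra when the work is fast and only fails when something is actually broken.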
If only one person is writing tests, that's a problem you won't have, but between the two situations... I think you have it worse.