Martin Fowler is in the certification-for-box-checkers business.
99% of the people who read them work at places where they must "move to X" to justify some department. They will likely implement a simulacrum of X (usually by importing some Java library someone wrote as homework), adding all the pitfalls and future problems of X with none of the benefits.
Haven't really read Fowler's stuff, but I have read Martin Kleppmann's Designing Data-Intensive Applications and that was helpful. Haven't seen it mentioned here (though I haven't looked thoroughly through the comments), so I just thought I'd mention it.
Agreed. Fowler and company are often criticized not for their work but for their audience, or more specifically their paying customers. That makes little sense to me; I got a lot out of their writing, especially in the early days of OO.
Fowler and Beck in particular have been massively useful to me recently. Refactoring and TDD by Example are wonderful books, and they completely changed my approach to software.
I also love Feathers and Working Effectively with Legacy Code, but that might be more of a niche taste ;)
Working Effectively is a bit niche, but unless a team is willing to rewrite from scratch or has stumbled on a perfect design, refactoring and dealing with untested code will hit everyone eventually. Hell, even existing test suites may not be sufficiently preserved or remain fully functional after enough time has passed. Reading that book was one of my best time investments.
It's been a long time, so I had to ponder this question for a bit.
For context, I was working on precisely the kinds of systems Feathers is talking about when I read the book. His definition of "legacy" revolves largely around a lack of testing, which makes changing the system very difficult (you can't be sure what you've done hasn't broken something else, so you either slow down to do a proper analysis or move fast and almost certainly break things). We had inherited a 500k SLOC embedded system for maintenance, with changes needed, but the previous developers had stripped out all the tests when they shipped it. Literally: I found the directories for the tests and the test framework, and every file had been removed. So even if the system wasn't legacy before, it was by the time it got to us.
While I learned a lot, the primary benefit was improving my ability to communicate ideas I already had formed or solidified those ideas in my mind. I had struggled for a long time (I've talked about this in other posts) with managers not seeing the value of spending time building out automated test suites or enforcing good architecture/design, and there's only so much one person can do. Having a more articulate argument was very helpful.
Things I believed before and were reinforced by the book: automate testing as much as possible, unit testing is generally a positive and enabled by good modular design, modular design is critical to keeping a system sustainable over the medium and long term.
I'm looking through the table of contents to make sure what I'm talking about next came from this book and not other readings.
Particularly useful: his discussion of the "seam" model, which is, in other terms, about finding module boundaries and dividing the system cleanly into modules when it wasn't already. If a system wasn't designed with modularity in mind, or the clean initial lines weren't maintained, these seams can be hidden away or blurred. He discusses, early in the book, how to find them, how to refactor to separate the modules, and how to develop tests for each one. Looking at the TOC, chapters 9 and 10 were probably two of the more useful ones for me ("I Can't Get This Class into a Test Harness" and "I Can't Get This Method into a Test Harness").
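To make the seam idea concrete, here's a minimal sketch (the names and scenario are mine, not from the book): a class that reaches straight into a side-effecting dependency has no seam, while injecting that dependency creates what Feathers calls an object seam, with the constructor argument as its enabling point.

```python
import time


class HardwiredLogger:
    """Hard to get into a test harness: it calls the real clock directly."""

    def log(self, msg):
        return f"{time.time()}: {msg}"  # output changes on every run


class SeamLogger:
    """Object seam: the clock is injected, so a test can substitute a fake
    without touching the production code path (which defaults to the real clock)."""

    def __init__(self, clock=time.time):
        self._clock = clock

    def log(self, msg):
        return f"{self._clock()}: {msg}"


# In a test harness, the seam lets us pin down the nondeterministic part:
logger = SeamLogger(clock=lambda: 1234.0)
assert logger.log("boot") == "1234.0: boot"
```

The production call site doesn't change at all, which is the point: the seam lets you alter behavior at the enabling point without editing the method under test.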
Also, if you're working on a clean-slate system, this book is still helpful: it keeps the pitfalls in mind so you avoid creating another legacy app. A lot of what it discusses is also present in other good books (particularly those with an emphasis on refactoring, like, well, Refactoring, and on OO/modular design), but here it's framed around trying to change or improve a system that doesn't have a good (present) architecture or tests to aid you.
One caution: the print copies have gotten worse over the years, so get a digital copy or an old print copy. My office had a printing with about 20 pages repeated on top of 20 missing pages (for example, page 140 -> 141 was actually 140 -> 121, or whatever the numbers were). When I bought my own, much more recent hard copy (which I think I left with a coworker at that office when I moved, because it's not here at home), it had other, similar issues; I think it had a whole chapter missing, with two half-chapters repeated in its place. O'Reilly has it in their digital library, which I've had access to through most employers, so that's where I actually ended up reading most of it.
> Google has on purpose made Android different enough so that no Android SoC could run mainline Linux.
Daily reminder that this is only possible because the very people on this site "did not care about the GPL or a tainted kernel" as long as they had their Nvidia GTX working to play Quake.
Decades ago I was following a kid's graduation project: an OS with a fully persistent design. That was before Android and other always-on "computers".
It was called something like Unununium (yeah, same name as the element, for extra hard-to-search points). I noticed the project when he suggested unununium-time as an alternative to Linux time on a list I was on, to fix some time-skew problems. But what caught my interest was his vision that in the near future (remember, before Android/iOS) computers would not care about offline data storage, and an OS should be optimized for always-on, RAM-only operation.
I can still find some of the assembly versions, but the fun stuff and interesting ideas showed up in a rewrite in Python(!), and that I can't find anywhere any more.
The visionary aspect was that it didn't need new hardware concepts. Just drop the ephemeral RAM (or treat it like an L4 cache, so to speak) and hope permanent storage gets faster, which it did.
So they can continue to hide from the public that most of their revenue comes from the secondary uses they monetize from their database: research and law enforcement (or anyone who wants to buy their "anonymized" database, which still seems to contain zip codes).
Technically, every flash memory today is a cube (well, more of a skewed cube, a parallelepiped).
Mostly because they can't figure out how to make things faster, or as fast as the interface, which keeps getting faster, so they just pile a bunch of flash controllers on top of each other. Everyone is running striped RAID-0 and doesn't even know it.
And if you consider that it also uses varying analog voltage levels in each cell (anything beyond SLC), it is arguably a 4D cube. Take that, 90s!
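The internal-RAID-0 point above can be sketched as a toy model (the channel count and mapping are illustrative, not any real controller's layout): consecutive logical blocks land round-robin on parallel flash channels, exactly like striping across disks.

```python
# Toy model of RAID-0-style striping inside an SSD: consecutive logical
# block addresses (LBAs) are spread round-robin across parallel flash
# channels, so a sequential read can pull from all channels at once.
NUM_CHANNELS = 4  # hypothetical; real drives vary


def place(lba):
    """Map a logical block address to (channel, offset within that channel)."""
    return lba % NUM_CHANNELS, lba // NUM_CHANNELS


# Eight consecutive blocks land on all four channels, two deep each:
layout = [place(lba) for lba in range(8)]
assert layout == [(0, 0), (1, 0), (2, 0), (3, 0),
                  (0, 1), (1, 1), (2, 1), (3, 1)]
```

Sequential throughput scales with the channel count for the same reason RAID-0 throughput scales with the disk count: each channel serves every Nth block in parallel.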
No idea what icon you are speaking of. I don't use Chrome much and don't log into it when I do. I certainly don't think anyone is going to pay for the privilege.
OMG the lack of understanding of basic ad tracking in this forum baffles me.
Nobody is going to pay. But if Google serves an ad to a logged-out (or Firefox) user, they get paid 0.05c; if they serve the same ad to someone logged in (via that button or otherwise), they get paid 1.7 USD.
The button is just there to keep you always logged in.
You are paying all the time and don't even realize it.
You have 20 extensions that were "approved" by mozilla more than a year ago! All the others are banned forever.
You can install uBlock, but not uMatrix. Many, many extensions are banned just because Mozilla reviewed ONE extension that they preferred for a certain use case. For example, for night reading on OLED phones, the most-installed extension at the time was "Dark Background and Light Text", which offered many customizations, worked in reader mode, and, more importantly, had reddish text/links. But too bad, because Mozilla picked the one with NO settings but a nicer icon (which is literally Disney's Darth Vader, but who cares), called "Dark Reader", which only has bright blue links and white text (and, despite the name, does not work in reader mode). And because there is already one extension with "dark mode", they will never whitelist the others, or even provide a setting to enable them on your own.
> You have 20 extensions that were "approved" by mozilla more than a year ago! All the others are banned forever.
This is a ridiculous lie, Mozilla has said repeatedly that more extensions are being enabled as support for more extension APIs are added, and it has been true. Several of my extensions which were previously disabled have come back online after updates.
I checked this three days ago after reading the Mozilla blog post, and literally none of the extensions I wanted (except the ones already available on stable) were available. The list of available extensions, even on Nightly, is very small.
Perhaps it will grow but I'll wait to see that before I believe it.
I've been waiting for very basic and popular extensions (uMatrix, anyone?) for SEVERAL MONTHS.
And try to offer community help. Ha!
Mozilla has to die for Firefox to live.
uMatrix and uBlock share the same base code, and we have uBlock but no uMatrix... it is all so rotten. It's almost like Mozilla is actively breaking Firefox.
I can't imagine what kind of stupids are steering Mozilla, but they definitely want people to move to Chrome.
They disabled many loved extensions by power users for absolutely no reason at all! And after those power users spent years bending to all their capricious changes, too. Moving to WebExtensions? Done. Moving to a new mobile UI? Done. But they still want you to move to Chrome no matter what.
We should take firefox out of their hands before it is too late.
>They disabled many loved extensions by power users for absolutely no reason at all!
The reasons were stated repeatedly. They rewrote the mobile browser engine, which broke extension API support since all of the internal APIs changed, and they didn't have the resources to support both browsers simultaneously for a long period of time, so they prioritized the most-used extensions first and will enable more extensions as the APIs are hooked back up underneath.
This had tangible benefits - the new browser is significantly snappier and uses less power in my experience.
>We should take firefox out of their hands before it is too late.
It's open source, if you aren't satisfied with the speed of their progress, you can always help out. You say you'd like to take this work out of their hands? Well, here it is.
These are, specifically, unimplemented APIs and known API bugs in the new Firefox Mobile, that are on the Mozilla TODO list, and for which contributions would presumably be welcome. Enjoy.
Unless when you said "we", you actually meant "other people".
One problem is that it doesn't work as simply as "all APIs this add-on uses are supported, so let's enable it". Instead, they still insist on explicitly whitelisting every individual add-on. So a few really popular extensions are whitelisted and the long tail is left behind, even if it might work perfectly well, or at least be usable enough.
For example, I have one small extension that I maintain, which is basically little more than a glorified page script and therefore doesn't use any special extension APIs at all. Despite that, it took months for it to be enabled, and if it weren't as popular as it is, I might still be waiting even today.
> the new browser is significantly snappier and uses less power in my experience.
Ha! From a cold start on my phone, launching the new Firefox (with fewer add-ons) and loading a page seems to take approximately twice as long as on Firefox 68, and still ~50% longer than even on Firefox 55.
Postscriptum: To be fair, after investigating a little more, the cold-start penalty with the new Firefox seems likely due to the fix for a bug that meant uBlock and similar add-ons were previously unable to intercept the first few network connections happening right at startup.
So on the one hand, fixing that bug makes sense and has some value, but on the other hand, in practice the increased startup time still feels rather annoying, given that my phone isn't the latest and fastest model.
> The reasons were stated repeatedly. They rewrote the mobile browser engine
*This has nothing to do with my comment.*
I am talking about two or three iterations AFTER that. Please stop writing long comments about things you have no idea about.
Yes, Mozilla moved to a new engine. Before that, they moved to a new extension format. Etc., etc., etc.
All the extension developers had already done the ports. *THEN* Mozilla, on mobile only, enabled "recommended extensions", which was fine. Until they DISABLED the non-recommended ones (not the ones that hadn't been updated to the new tech).
It has nothing to do with technology. In fact, I have many of those extensions working on my phone by fiddling with the whitelist URLs. No problem running them AT ALL.
And progress is being made. Like I said, I've personally seen most of my extensions that were initially disabled due to incompatibility re-enable themselves after updates.
One needs to make a Mozilla account to complete the last steps, and that's where I've been holding off. Mostly out of laziness, and also from lack of necessity, since I've just moved on to other browsers in the meantime.