Even large companies are still grasping at straws when it comes to good code. Meanwhile there are articles I wrote years ago which explain clearly from first principles why the correct philosophy is "Generic core, specific shell."
I actually remember, early in my career, working for a small engineering/manufacturing prototyping firm that wrote its own software. There was a senior developer there who didn't speak very good English, but he kept insisting that the "business layer" should be on top. How right he was. At the time I couldn't have imagined how much wisdom and experience was packed into such simple, malformed sentences. Nothing else really matters. Functional vs. imperative is a very minor point IMO, mostly a distraction.
Your advice is the opposite of "functional core, imperative shell". In FCIS, the IS is the generic part, kept deliberately simple because it's usually hard to test (it deals with resources and external dependencies). By keeping it simple, you leave as much of the system as possible unit-testable.
On the other hand, FC is where the business logic lives, which can be complex and specific. The reason why you want that "functional" (really just another name for "composable from small blocks") is because it can be tested for validity without external dependencies.
So the IS shields you from the technicalities of external dependencies: what quirks your DB has, whether we're sending data over the network or writing to a file, whether the user inputs commands in Spanish or English, whether you display a green square or a blue triangle to indicate the report is ready, etc.
On the other hand, FC deals with the actual business logic (what you want to do), which can be both generic and specific. These are just different types of building blocks (we call them functions) living in the FC.
FCIS is exemplified by user-shell interaction. The user (FC) dictates the commands and interprets the output, according to her "business needs". While the shell (IS) simply runs the commands, without any questions of their purpose. It's not the job of IS to verify or handle user errors caused by wrong commands.
But the user doesn't do stuff on her own; you could take her to a pub and she would tell you the same sequence of commands when facing the same situation. In that sense, the user is "functional": independent of the actual state of the computer system, just as the return value of a mathematical function depends only on its arguments.
Another example is MVC, where M is the FC and VC is the IS. Although it's not always exactly like that, for a variety of reasons.
You can think of IS as a translator to a different language, understood by "the other systems", while the FC is there to supply what is actually being communicated.
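To make that concrete, here's a minimal Python sketch (all names made up, `db_conn` stands in for any sqlite3-style connection): the core decides, the shell translates for the outside world.

```python
# Minimal FCIS sketch; names are illustrative, not from any real codebase.

def report_status_label(report_ready: bool, language: str) -> str:
    """Functional core: pure decision logic, trivially unit-testable."""
    labels = {
        ("en", True): "Report ready",
        ("en", False): "Report pending",
        ("es", True): "Informe listo",
        ("es", False): "Informe pendiente",
    }
    return labels[(language, report_ready)]

def show_report_status(db_conn, language: str) -> None:
    """Imperative shell: knows about the DB and the display, asks no questions."""
    row = db_conn.execute("SELECT ready FROM reports LIMIT 1").fetchone()
    print(report_status_label(bool(row[0]), language))
```

Swapping the DB for a file, or the green square for a blue triangle, only touches the shell; the core never notices.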
I disagree that these two pieces of advice are opposed. I think they are orthogonal at worst, and in agreement at best.
"Functional core, imperative shell" (FCIS) is a matter of implementing individual software components that need to engage with side-effects --- that is, they have some impact on some external resources. Rather than threading representations of the external resources throughout the implementation, FCIS tells us to expel those concerns to the boundary. This makes the bulk of the component easier to reason about, being concerned with pure values and mere descriptions of effects, and minimizes the amount of code that must deal with actual effects (i.e. turning descriptions of effects into actual effects). It's a matter of comprehensibility and testability, which I'll clumsily categorize as "verification": "Does it do what it's supposed to do?"
"Generic core, specific shell" (GCSS) is a matter of addressing needs in context. The problems we need solved will shift over time; rather than throwing away a solution and re-solving the new problem from scratch, we'd prefer to only change the parts that need changing. GCSS tells us we shouldn't simply solve the one and only problem in front of us; we should use our eyes and ears and human brains to understand the context in which that problem exists. We should produce a generic core that can be applied to a family of related problems, and adapt that to our specific problem at any specific time using a, yes, specific shell. It's a matter of adaptability and solving the right problem, which I'll clumsily categorize as "validation": "Is what it's supposed to do what we actually need it to do?"
Ideally, GCSS is applied recursively: a specific shell may adapt an only slightly more generic core, which then decomposes into a smaller handful of problems that are themselves implemented with GCSS. When business needs change in a way that the outermost "generic core" can't cover, odds are still good that some (or all) of its components can still be applied in solving the new top-level problem. FCIS isn't really amenable to the same recursion.
Both verification and validation activities are necessary. One is a matter of internal consistency within the component; the other is a matter of external consistency relative to the context the component is being used in. FCIS and GCSS advise on how to address each concern in turn.
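A tiny, hypothetical sketch of GCSS in that spirit: the core solves a whole family of accumulation problems, and a thin shell pins it to this quarter's specific policy.

```python
from typing import Callable, Iterable

def fold_ledger(entries: Iterable[float],
                rule: Callable[[float, float], float],
                start: float = 0.0) -> float:
    """Generic core: folds any pricing/accounting rule over a ledger."""
    total = start
    for entry in entries:
        total = rule(total, entry)
    return total

def q3_promo_total(entries: Iterable[float]) -> float:
    """Specific shell: this quarter's 10% discount policy, easy to swap out."""
    return fold_ledger(entries, lambda acc, e: acc + e * 0.9)
```

When the promo changes, only the shell changes; the core keeps solving the general problem.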
Glad to see that other people have come to a similar conclusion as I did. I don't understand why the industry hasn't been able to form much consensus around this.
Look at electronics and vehicles: the components inside are generic, and many devices use exactly the same internal components. All the internal components are chosen specifically for their ability to handle a wide range of conditions (pressure, heat, electromagnetic interference), and also for how broadly compatible they are with other components and tools. But somehow, when it comes to software, we treat it as if it's completely different.
There is a lot of value in using generic components which can handle a wide range of use cases. You want to avoid changing dependencies as much as possible because it takes time and effort to write robust code. You want a solid foundation which can solve a well defined (but not necessarily narrow) range of problems. Some modules can be used to solve many different problems, in completely different business domains but they may have a very simple, well defined interface. Think of a screw... Very simple, well-defined interface, can be applied to a huge range of use cases.
> I think they are orthogonal at worst, and in agreement at best.
I have considered them being orthogonal, but then the definition of "shell" and "core" becomes problematic in this comparison. What you call the shell in GCSS is not the shell in FCIS at all; it's more like a boundary. Even there, it's questionable whether the boundary should be more specific than the core. At the core, things can be more integrated than at the boundary, and so it can carry more business-specific logic.
The definitional question is: if you take an application, where is the business logic, in the "core" or not? I would say it is; what the application's main purpose is, is literally its "core". And "shell" is similarly well defined. For example, a UI without an engine implementing the actual logic is just a "shell".
I am not disputing the GP's advice as you understand it, although I feel it is perhaps a little simplistic, if not tautological ("prefer generic building blocks where possible"), and it really muddles up what the core and shell are in the FCIS sense.
One way to get some intuition with FCIS is to write some Haskell.
Because Haskell programs pretty much have to be FCIS or they won't compile.
How it plays out is...
1. A Haskell program executes side effects (known as `IO` in Haskell). The type of the `main` program is `IO ()`, meaning it does some IO and doesn't return a value - a program is not a function
2. A Haskell program (code with type `IO`) can call functions. But since functions are pure in Haskell, they can't call code in `IO`.
3. This doesn't actually restrict what you can do but it does influence how you write your code. There are a variety of patterns that weren't well understood until the 1990s or later that enable it. For example, a pure Haskell function can calculate an effectful program to execute. Or it can map a pure function in a side-effecting context. Or it can pipe pure values to a side-effecting stream.
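Haskell enforces this split in the type system; elsewhere it's a convention. A loose Python sketch of the "pure function calculates an effectful program" pattern from point 3 (names are illustrative):

```python
def plan_notifications(users: list[dict]) -> list[tuple[str, str]]:
    """Pure: computes a description of effects, touches nothing external."""
    return [("send_email", u["email"]) for u in users if u["wants_updates"]]

def run(plan: list[tuple[str, str]]) -> None:
    """Imperative shell: the only place effects actually happen."""
    for action, target in plan:
        if action == "send_email":
            print(f"emailing {target}")  # stand-in for a real SMTP call
```

The planner can be unit-tested exhaustively with plain data; only `run` ever needs the outside world.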
I used to write lots of Haskell before deciding it didn't meet my needs. However, the experience provided lots of long-term benefits, including an FCIS design mindset.
Recently, I did a major python refactoring project, converting a prototype/hack/experiment into a production-quality system. The prototype heavily intermixed IO and app logic, and I needed to write unit tests for the production system. Even with fixtures and mocking, unit testing was painful and laborious, so our test coverage was lousy.
Partitioning the core classes into pure and impure components was the big win. Unit testing became trivial and we caught lots of bugs in the original business logic. More recently, we changed the IO from files to a DB and having encapsulated the IO was also a win.
Full algebraic data types wouldn't have added much here. Product types are already everywhere, and we didn't need sum or exponential types.
Splitting IO and pure code was just routine refactoring, not a full redesign. Our app logic wasn't strictly pure because it generates pseudorandom numbers and logs events, but practically speaking, splitting the IO and shell from the relatively pure app logic made for much cleaner code.
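A hypothetical sketch of the kind of partition I mean (not our actual code): inject the randomness, keep the IO at the edge.

```python
import json
import random

def sample_records(records: list[dict], k: int, rng: random.Random) -> list[dict]:
    """App logic: deterministic given the injected rng, no file or DB access.
    A test can pass random.Random(42) and assert exact outputs."""
    return rng.sample(records, k)

def main(path: str) -> None:
    """Imperative shell: the file IO (later swapped for a DB) lives here."""
    with open(path) as f:
        records = json.load(f)
    for rec in sample_records(records, 3, random.Random()):
        print(rec)
```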
In retrospect, I consider FCIS a good practice that I first learned with Haskell. It's valuable elsewhere, even when used in a less formal way than Haskell mandates.
> then the definition of the "shell" and "core" becomes problematic in this comparison
I agree -- if you're trying to make the words "shell" and "core" mean the same things between FCIS and GCSS, or identify the same parts of a program, then there will be problems. I think FCIS and GCSS are just two different ways of analyzing a program into pieces. Just as the number 8 can be seen through addition as 3 + 5 and through multiplication as 2 * 4, a program can be analyzed in multiple ways. If you view a program through the lens of FCIS, you expect to see a broad region of the program in which side-effects don't occur and a narrow region in which they do. If you view a program through the lens of GCSS, you expect to see broad parts of the program that solve general problems, and narrower regions in which those problems are instantiated in the specific. The narrower regions are all "shell"-shaped, but that doesn't mean they are "the" shell. They have in common simply that they wrap a bulk of functionality to interface it to a larger context.
> At the core, things can be more integrated than at the boundary, and so it can have more business-specific rules.
I tend to disagree. Decomposition is a fundamental part of software engineering: we decompose a large problem into smaller ones, solve those, then compose those solutions into a solution to the large problem (cf. Parnas' "On the criteria to be used in decomposing systems into modules"). It is often easier to solve a more general problem than the one originally given (Polya's principle). Combining the two yields GCSS.
A solution to each individual small problem can be construed as having its own generic core, and the principles used in composing sibling solutions constitute the specific shells that wrap them, allow them to interface, and together implement a solution to a higher-level problem.
Because there are multiple of these "cores", each solving a decomposed part of the top-level problem, it's hard for me to see how "At the core, things can be more integrated than at the boundary".
> The definition question is, if you take an application, where is the business logic, is it in the "core" or not?
I don't mean to be a sophist, but I think I need a more precise meaning of "business logic" before I can answer this question. In the process of solving successively smaller (and more general) problems, we abstract away from the totality of the business problem being solved, and address smaller and less-specific aspects of that problem. It may be that each subproblem's solution is architected as an individual instance of FCIS, as is often argued for microservice architectures; or that each subproblem is purely functional and only the top-level solution is wrapped in an imperative shell; or anywhere in between. Needless to say, I think that choice is orthogonal.
As a result, I would say that the business logic itself has been factorized and distributed across the many subproblems and their solutions, and that indeed the "specific shell"s that are responsible for specializing solutions toward the specific case of the business need may necessarily include business logic. For instance, when automating a business process, one often needs to perform a complex step A before a complex step B. While both A and B might be independently solvable, orchestrating them together is still "business logic", because they need to be performed in order according to business needs.
(In all of this, perhaps you can see why I don't think the "core" and "shell" of FCIS should be identified with the "core" and "shell" of GCSS. Words are allowed to have contextual meanings!)
Sure, but this discussion is about FCIS, that's the context, and the GP should consider that.
" think I need a more precise meaning of "business logic" before I can answer this question"
Well, some examples. A tax application: the tax calculation according to the law. A word processor: the layout and rendering engine. A video game: something that calculates the state of the world, according to the rules of the game.
So a game is a good example where the core can be more specialized than the shell. You can imagine a generic UI library shared by a bunch of games, but a generic game rules engine - that's just a programming language.
"Decomposition is a fundamental part of software engineering: we decompose a large problem into smaller ones, solve those, them compose those solutions into a solution to the large problem"
There is a big misconception in SW engineering that the above decomposition always exists in a meaningful way. Take the tax calculation for example. That cannot be decomposed into pieces that are generic, and potentially reusable elsewhere. It's just a list of rules and exceptions that need to be implemented as stated. You can decompose it into "1st part of calculation" and "2nd part of calculation", but that's meaningless (unhelpful). (Similarly for the game example above, the rules only exist in the context of other rules.)
Surprisingly many problems are like that, and that makes them kinda difficult to test.
> Take the tax calculation for example. That cannot be decomposed into pieces that are generic, and potentially reusable elsewhere. It's just a list of rules and exceptions that need to be implemented as stated. You can decompose it into "1st part of calculation" and "2nd part of calculation", but that's meaningless (unhelpful). (Similarly for the game example above, the rules only exist in the context of other rules.)
As someone who does this himself for his own taxes: you're looking only at the "specific shell" part. The generic core is the thing that does the math - spreadsheet, database, whatever. The tax rules are then imposed on top of that core.
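A sketch with made-up numbers, purely to illustrate the split: the generic core does the bracket math, and a given year's rules are just data imposed on top.

```python
def tax_owed(income: float, brackets: list[tuple[float, float]]) -> float:
    """Generic core: progressive-bracket math, reusable across years and countries."""
    owed, floor = 0.0, 0.0
    for ceiling, rate in brackets:
        if income <= floor:
            break
        owed += (min(income, ceiling) - floor) * rate
        floor = ceiling
    return owed

# "Specific shell": one hypothetical year's rules, stated as data
# (these brackets are invented, not any real tax code).
FAKE_BRACKETS = [(10_000, 0.10), (50_000, 0.20), (float("inf"), 0.30)]
print(tax_owed(60_000, FAKE_BRACKETS))  # 1000 + 8000 + 3000 = 12000.0
```

When the law changes, the data changes; the arithmetic core stays put.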
Well, you can claim that the core is the programming language in which we write those tax rules, but that's not a very useful distinction IMHO (for deciding how to write programs in that language).
That's only the case where a usable generic core already exists. A great example where it didn't exist is python's "requests" library: https://requests.readthedocs.io/en/latest/
The example on the homepage is the "specific shell" - simple and easy to use, and by far the most common usage, but if you scroll down the table of contents on the API page (https://requests.readthedocs.io/en/latest/api/) you'll see sections titled "Lower-Level Classes" and "Lower-Lower-Level Classes" - that's the generic core, which the upper level is implemented in terms of.
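For instance, using requests' real public API, the same request at both levels:

```python
import requests
from requests.adapters import HTTPAdapter

# The "specific shell": the one-liner most users want.
r = requests.get("https://api.github.com/events")

# The "generic core" peeking through: Session and HTTPAdapter are among the
# lower-level classes the one-liner is built on top of.
session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=3))
r = session.get("https://api.github.com/events")
```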
I like this example :) Another good example might be Git's distinction between "porcelain" and "plumbing"; the porcelain is implemented in terms of the plumbing, and gives a nicer interface in terms of what people generally want to do with Git, but the plumbing is what actually does all the general, low-level stuff.
> Take the tax calculation for example. That cannot be decomposed into pieces that are generic, and potentially reusable elsewhere.
Quite right! However, the tax code does change with some regularity, and we can expect that companies like Intuit should have gotten quite good by now -- even on a pure profit motive -- at making it possible to relatively quickly modify only the parts of their products that require updating to the latest tax code. To put it another way, while it might be the case that the tax code for any given year is not amenable to decomposition, all tax codes within a certain span of years might be specific instances of a more general problem. (I recall a POPL keynote some years back that argued for formalizing tax codes in terms of default logic!) By solving that general problem, you can instantiate your solution on the given year's tax code without needing to recreate the entire program from scratch.
To be clear, I'm the one who brought subproblem decomposition into the mix, and we shouldn't tar the top-level commenter with that brush unnecessarily. Of course some problems will be un-decomposable "leaves". I believe their original point, about a specific business layer sitting on top of a more general core, still applies.
> So a game is a good example where the core can be more specialized than the shell. You can imagine a generic UI library shared by a bunch of games, but a generic game rules engine - that's just a programming language.
As it happens, the "ECS pattern" (Entity, Component, and System) is often considered to be a pretty good way of conceptualizing the rules of a game. An ECS framework solves the general problem (of associating components to entities and executing the systems that act over them), and a game developer adapts an ECS framework to their specific needs. The value in this arrangement is precisely that, as the game evolves and takes shape, only the logic specific to the game needs to be changed. The underlying ECS framework is on the whole just as appropriate for one game as for any other.
(I could also make a broader point about game engines like Unity and Unreal, and how so many games these days take the "general" problem solved by these engines and adapt them to the "specific" problem of their particular game. In general, nobody particularly wants to make engine-level changes for each experiment during the development of a game, even though sometimes a particular concept for a game demands a new engine.)
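For flavor, a deliberately minimal, framework-free ECS sketch (hypothetical; a real framework does far more):

```python
# Entities are ids, components are plain data, systems are functions.
entities = {
    1: {"position": [0.0, 0.0], "velocity": [1.0, 0.5]},
    2: {"position": [5.0, 5.0]},  # no velocity component: movement skips it
}

def movement_system(entities: dict, dt: float) -> None:
    """A system acts on every entity that has the components it needs."""
    for comps in entities.values():
        if "position" in comps and "velocity" in comps:
            comps["position"][0] += comps["velocity"][0] * dt
            comps["position"][1] += comps["velocity"][1] * dt

for _ in range(3):  # the generic loop; the game-specific systems are what vary
    movement_system(entities, dt=1.0)
```

The loop and the entity/component bookkeeping are the generic core; a game's rules show up only as the particular systems and components you plug in.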
> Sure, but this discussion is about FCIS, that's the context, and the GP should consider that.
I understood the original commenter as criticizing FCIS (or at least the original post, as "grasping at straws") and suggesting that GCSS is generally more appropriate. In that context, I think it's natural to interpret their use of "core" and "shell" as competing rather than concordant with FCIS.
>GCSS tells us we shouldn't simply solve the one and only problem in front of us; we should use our eyes and ears and human brains to understand the context in which that problem exists
This violates KISS and YAGNI, and potentially leads to overengineered code and excessive abstraction.
Everything "potentially leads" to adverse outcomes if not applied with due care and cognizance. That includes KISS and YAGNI. If you're looking for a principle you can apply in 100% of cases without consideration of context, I'm afraid you'll need to shop elsewhere.
That's the gotcha though. Everything applied with due care and cognizance works. This is not what is being discussed here. What the author suggests does lead to overengineering though. Think of stereotypical enterprise Java code if you need examples
The context was "business", that kind of application is developed quite differently than, say, a cool little hobby terminal emulator or whatever.
Even though the business currently doesn't have a need to e.g. support any currency other than USD and EUR, an experienced developer will clearly see that it is unlikely to stay that way for long, so doing some preliminary preparation for generalizing currencies may well be worth the time.
>Even though the business currently doesn't have a need to e.g. support any other currency than USD and EUR
Your regular business requirements are way more complex than just a currency list. This is like trying to justify your point using an oversimplified example imo.
I think the "generic core" is often a SQL database or other such generic storage/analytics layer, while the "functional core" is the business logic that operates on the specific domain objects.
A functional core can actually be very generic. There is nothing in the functional paradigm that makes functions less generic. They are as generic or specific as you write them.
I've seen it many times. And then every task takes longer than the last one, which is what pushes teams to start rewrites.
"There's never enough time to do it right, but always time to do it again."
> Probably many reasons for this, but what I've seen often is that once the code base has been degraded, it's a slippery slope downhill after that.
Another factor, and perhaps the key factor, is that, contrary to OP's extraordinary claim, there is no such thing as objectively good code, or one single and true way of writing good code.
The crispest definition of "good code" is that it's not obviously bad code from a specific point of view. But points of view are also subjective.
Take for example domain-driven design. There are a myriad of books claiming it's an effective way to generate "good code". However, DDD has a strong object-oriented core, to the extent it's nearly a purist OO approach. But here we are, seeing claims that the core must be functional.
If OP's strong opinion on "good code" is so clear and obvious, why are there such critical disagreements at such a fundamental level? Is everyone in the world wrong, and OP is the poor martyr cursed with being the only soul in the whole world who even knows what "good code" is?
Let's face it: the reason there is no such thing as "good code" is that opinionated people making claims such as OP's are actually passing off "good code" claims as proxies for their own subjective and unverified personal taste. In a room full of developers, if you throw a rock in a random direction you're bound to hit one or two of these messiahs, and no two of them agree on what good code is.
Hearing people like OP comment on "good code" is like hearing people comment on how their regional cuisine is the true definition of "good food".
The original 2003 DDD book is very 2003 in that it is mired in object orientation, to the point of frequently referencing object databases as a state-of-the-art storage layer.
However, the underlying ideas are not strongly married to object orientation and they fit quite nicely in a functional paradigm. In fact, ideas like the entity/value object distinction are rather functional in and of themselves, and well-suited to FCIS.
> The original 2003 DDD book is very 2003 in that it is mired in object orientation, to the point of frequently referencing object databases as a state-of-the-art storage layer.
Irrelevant, as a) that's just your own personal and very subjective opinion, b) DDD is extensively documented as the one true way to write "good code", which means that by posting your comment you are unwittingly proving the point.
> However, the underlying ideas are not strongly married to object orientation and they fit quite nicely in a functional paradigm.
"Underlying ideas" means cherry-picking opinions that suit your fancy while ignoring those that don't.
The criticism of anemic domain models, which are elevated to the status of anti-pattern, is more than enough to reject any claim that functional programming is compatible with DDD.
And that's perfectly fine. Not being DDD is not a flaw or a problem. It just means it's something other than DDD.
But the point that this proves is that there is no one true way of producing "good code". There is no single recipe. Anyone who makes this sort of claim is either both very naive and clueless, or is invested in enforcing personal tastes and opinions as laws of nature.
> "Underlying ideas" means cherry-picking opinions that suit your fancy while ignoring those that don't.
Yes, that is how terminology evolves away from a rigid definition laid down in a different era of best-practice coding beliefs. I'll admit I had trouble mapping the DDD OO concepts from the original book(s) to systems I work on now, but there are more recent resources that use the spirit of DDD, domain separation, and domain modeling outside of OO contexts. You're right that there is no single recipe: take the good ideas and practices from DDD and apply them as appropriate.
And if the response is "that's not DDD", well you're fighting uphill against others that have co-opted the buzzword as well.
> Irrelevant, as a) that's just your own personal and very subjective opinion
Yes? And it's just your personal, subjective opinion that this is irrelevant. Most meaningful judgments are subjective. Get used to it.
> DDD is extensively documented as the one true way to write "good code"
Who said this? I've seen it described as a good way to write code, and as a way of avoiding problems that can crop up in other styles. But never as the only way to write good code.
> "Underlying ideas" means cherry-picking opinions that suit your fancy while ignoring those that don't.
No it doesn't. What? The only way I can make sense of what you're saying is if you're cynical toward the very concept of analyzing ideas, which is perhaps the most anti-intellectual stance I can imagine.
> The criticism on anemic domain models [...] is more than enough to reject any claim on how functional programming is compatible with DDD.
Why would an author's criticism of a certain style of OOP make a methodology they have written about incompatible with non-OOP paradigms? That's like saying that it's impossible to make strawberry ice cream because the person who invented ice cream hates strawberries.
> But the point that this proves is that there is no one true way of producing "good code".
There's no "one true way" to build a "good bridge," but that doesn't mean bridge design is all a matter of taste. Suspension bridges can carry a lot more than beam bridges; if you want to drive 18-wheelers across a wide river, a beam bridge will collapse, while a suspension bridge will probably be "good."
> Meanwhile there are articles I wrote years ago which explain clearly from first principles why the correct philosophy is ...
I think this is a very common mistake. You've spent years, maybe decades, writing code and now you want to magically transfer all that experience in a few succinct articles. But no advice that you give about "the correct philosophy" is going to instantly transfer enough knowledge to make all large companies write good code, if only they followed it. Instead, I'm sure it's valuable advice, but more along the lines of a fragment within a single day of learning for a diligent developer.
A company I worked at recently had a more extreme version of this mistake. It had software written in the 1980s based on a development process by Michael Jackson (no, not that one!), a software researcher who had spent his whole career trying to come up with silly processes that were meant to fix software development once and for all; he wrote whole books about it. I remember reading a recent interview with him where he mourns that developers today are interested in new programming languages but not in development methodologies. (The code base I worked on was fine, by the way, given that it was 40 years old, but not really because of this Jackson stuff.)
I'm reminded of the Joel on Software article [1] where he compares talented (naturally or through experience) developers to really talented expert chefs, and those following some methodology to people working at McDonald's.
> But no advice that you give about "the correct philosophy" is going to instantly transfer enough knowledge to make all large companies write good code, if only they followed it.
Good old "Programming as Theory Building". It's almost impossible to achieve this kind of transfer without already having the requisite lived experience.
These are great and succinct, yours and your teammate’s.
I still find myself debating this internally, but one objective metric is how smoothly my longer PTOs go:
The only times I haven’t received a single emergency call were when I left teammates a large and extremely specific set of shell scripts and/or executables that do exactly one thing. No configs, no args/opts (or ridiculously minimal), each named something like run-config-a-for-client-x-with-dataset-3.ps1, that took care of everything for one task I knew they’d need. Just double-click this file when you get the new dataset, or clone/rename it and tweak line #8 if you need to run it for a new client, that kind of thing.
Looking inside, the scripts/programs look like the opposite of DRY and all the similar principles I’ve been taught (save for KISS and others similarly simplistic).
But the result speaks for itself. The further I go down that excessively basic path, the more people can get work done without me online, and the more I get to enjoy PTO. Anytime I make a slick, flexible utility with pretty code and docs, I get the “any chance you could hop on?” text. Put the slick stuff in the core libraries and keep the executables dumb.
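For illustration, the Python equivalent of one of those deliberately dumb scripts might look like this (everything here is hypothetical; in practice the helper would live in a shared core library rather than inline):

```python
# File: run-config-a-for-client-x-with-dataset-3.py
# Deliberately dumb executable: one task, everything hardcoded.

CLIENT = "client-x"    # clone this file and edit these constants
CONFIG = "config-a"    # to run the same task for someone else
DATASET = "dataset-3"

def run_pipeline(client: str, config: str, dataset: str) -> None:
    """Stand-in for the shared core library where the flexible code lives."""
    print(f"running {config} for {client} on {dataset}")

if __name__ == "__main__":
    run_pipeline(CLIENT, CONFIG, DATASET)
```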
I see a similar problem in infra-land where people expose too many config variables for too many things, creating more cruft. Knowing what to hardcode and what to expose as a var is something a lot of devs don't seem to understand; and don't realise they don't understand.
Oh definitely, many headaches untangling massive “variables.tf” files where the value is identical in 100% of the target environments, and would be nonsensical to change without corresponding changes in the infra config resources/modules as well.
My favorite is when security policy mandates something like private networking and RBAC, and certain resources only have meaning in those contexts; for heaven’s sake, why are we making their basic args like “enforce_tls” or “assign_public_ip” or “enable_rbac” into variable params for the user to figure out?
Yes, I feel that when to apply certain techniques is frequently under-discussed. But I can't blame people for erring on the side of 'do everything properly', as this makes life more pleasant in teams.
Although I think if you squint, the principle still applies to your example. The further you get from the 'core' of your platform/application/business/what-have-you, the less abstract you need to be.
A non-functional core tends to become a buggy mess, with workarounds in the shell to account for those bugs; then one needs to know about the internals of the core to use the shell correctly, and so on. A functional core lends itself very well to unit tests. Writing them is almost trivial when the functional core is done right. The imperative shell is then less of an issue, because the blast radius of a bug is reduced to one usage of the core. The imperative shell should of course be kept as small as reasonably possible.
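A trivial sketch of why (function and numbers made up): tests for a pure core need no fixtures, mocks, or setup at all, just inputs and expected outputs.

```python
def shipping_cost(weight_kg: float, express: bool) -> float:
    """Pure core function: no IO, no hidden state."""
    base = 5.0 + 1.2 * weight_kg
    return base * 2 if express else base

def test_shipping_cost():
    assert shipping_cost(0.0, express=False) == 5.0
    assert shipping_cost(10.0, express=True) == 34.0  # (5 + 12) * 2
```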
I would even go so far as to say that large companies struggle with this even more than small ones. The number of people who need to know how to build things properly is larger than in a small company, where one knowledgeable engineer might already be sufficient. Too many cooks spoiling the soup(/broth?). And lots of people are cooking these days.
Isn't this saying the business layer should not be on top?
Business layers should be accessible via an explicit interface/shape that is agnostic to the layers above them. So if the org decides to move from mailchimp to some other email provider, the business logic can remain untouched and you just need to write some code mapping the new provider to the business logic's interface.
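Sketched in Python (hypothetical names; the mailchimp adapter body is a stand-in, not the real API):

```python
from typing import Protocol

class EmailProvider(Protocol):
    def send(self, to: str, subject: str, body: str) -> None: ...

def notify_overdue_invoice(customer_email: str, provider: EmailProvider) -> None:
    """Business logic: knows nothing about which provider is underneath."""
    provider.send(customer_email, "Invoice overdue", "Please pay up.")

class MailchimpAdapter:
    def send(self, to: str, subject: str, body: str) -> None:
        print(f"[mailchimp] {to}: {subject}")  # stand-in for the real API call

notify_overdue_invoice("customer@example.com", MailchimpAdapter())
```

Switching providers means writing one new adapter class; `notify_overdue_invoice` never changes.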
Maybe our visualizations are mixed up, but I always viewed things like cloud providers, libraries etc. as potentially short lived whereas the core logic could stick around forever.
> The more specific, the more brittle. The more general, the more stable. Concerns evolve/decay at different speeds, so do not couple across shearing layers. Notice how grammar/phonology (structure) changes slowly while vocabulary (functions, services) changes faster.
...
> Coupling across layers invites trouble (e.g. encoding business logic with “intuitive” names reflecting transient understanding). When requirements shift (features, regulations), library maintainers introduce breaking changes or new processor architectures appear, our stable foundations, complected with faster-moving parts, still crack!
In my head, and in the way it's usually described, the generic and the specific are swapped: the core handles a specific, pure problem, while how you get the data (HTTP, databases, filesystem, etc.) is the generic part, irrelevant to the core problem.
> While internal modules and libraries should be kept as generic as possible, external-facing components, on the other hand, are a good place to put business-specific domain logic. External-facing components here refer not only to views but also to any kind of externally-triggered handlers including external API endpoints (e.g. HTTP/REST API handlers).
That goes against every bit of advice and training I've ever gotten, not to mention my experience designing, testing, and implementing APIs. Business logic belongs in the data model because of course the rules for doing things go with the things they operate on. API endpoints should limit themselves to access control, serialization, and validation/deserialization. Business logic in the endpoint handler—or worse, in the user interface—mixes up concerns in ways that are difficult to validate and maintain.
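A bare-bones sketch of that layering (hypothetical names, no real framework): the rule sits on the model, and the handler only guards, validates, and serializes.

```python
from dataclasses import dataclass

@dataclass
class Order:
    total: float
    items: int

    def qualifies_for_free_shipping(self) -> bool:
        """Business rule lives with the thing it operates on."""
        return self.total >= 50.0 or self.items >= 5

def handle_get_shipping(raw: dict, user_is_authorized: bool) -> dict:
    """Endpoint: access control, validation/deserialization, serialization only."""
    if not user_is_authorized:
        return {"status": 403}
    order = Order(total=float(raw["total"]), items=int(raw["items"]))
    return {"status": 200, "free_shipping": order.qualifies_for_free_shipping()}
```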