
I guess that's the main problem.

And look, defining a problem functionally works great in maybe 10% of cases, but it complicates some 40% of the others (very imprecise numbers).

The world is not functional, as it turns out. And a lot of the cases where you can write functionally don't gain much, from a performance or correctness perspective, compared to the procedural version.



> The world is not functional

What does this even mean? If the world is not functional, then what is it? Is the world procedural? And what world are we talking about? Our planet, physically? Are you then discounting the worlds of mathematics and logic? You don’t gain performance? What performance? Program execution speed? Development pace and time to market?

Compared to the procedural version? Is your view that procedural programming is inherently better — for any definition of the word — regardless of context? Would SQL queries be easier to write if we told the query planner what to do? Is the entire field of logic programming — and by extension expert systems and AI — just a big waste of time?

So many vague aphorisms which do nothing to further the debate. And Haskellers are the ones getting called “elitist”!


>> The world is not functional

> What does this even mean? If the world is not functional, then what is it?

The world is full of special cases, of corner cases, of exceptions, of holes, of particular conditions, of differences for first or last element, and other irregularities; it is also full of sequential events. Applying a function to a starting set, to produce a resulting set or value is very lean with clean nice mathematical objects and relations, but not with many real-world problems.

> And Haskellers are the ones getting called “elitist”!

Well, one may say that answering with questions could fit the bill...


> The world is full of special cases, of corner cases, of exceptions, of holes, of particular conditions, of differences for first or last element, and other irregularities; it is also full of sequential events.

True

> Applying a function to a starting set, to produce a resulting set or value is very lean with clean nice mathematical objects and relations

True

> but not with many real-world problems.

Debatable, but in any case a non-sequitur. Are you sure you're talking about functional languages as they're used in reality?

I once wrote a translator between an absurdly messy XML schema called FpML (Financial products Markup Language) and a heterogeneous collection of custom data types with all sorts of "special cases, corner cases, exceptions, holes, etc.". I wrote it in Haskell. It was a perfect fit.

https://en.wikipedia.org/wiki/FpML


> The world is full of special cases, of corner cases, of exceptions, of holes, of particular conditions, of differences for first or last element, and other irregularities; it is also full of sequential events.

Yes. All of which are modelled in Haskell in a pretty straightforward manner. I’d argue Haskell models adversity like this better than most languages.
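To make that concrete, a hypothetical sketch (the `commaList` function and its behaviour are invented for illustration): plain pattern matching handles "different treatment for the first or last element" head-on, with each irregular case as its own equation.

```haskell
-- Hypothetical sketch: joining words English-style needs special cases
-- for empty, singleton, and final-pair inputs. Each irregularity is its
-- own equation; no flags or index arithmetic required.
commaList :: [String] -> String
commaList []       = ""
commaList [x]      = x
commaList [x, y]   = x ++ " and " ++ y
commaList (x : xs) = x ++ ", " ++ commaList xs
```

For example, `commaList ["tea", "milk", "sugar"]` gives `"tea, milk and sugar"`.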

> but not with many real-world problems.

Haskell is a general-purpose language. People use it to solve real-world problems every day. I do. My colleagues do. A huge number of people do.

> Well, one may say that answering with questions could fit the bill...

I see. So trying to define the terms to enable constructive discourse is elitist. Got it.

If you want me to be more concrete and assertive, fine. No problem. Here we go.

You are wrong.


It's not vague: processors are still procedural. Network, disk, terminals: they all have side effects. Memory and disk are limited.

SQL queries are exactly one of those cases where functional expression of a problem outperforms the procedural expression, and that's why they're used where it matters.


Are you suggesting functional programming is somehow so resource intensive that it is impractical to use? Or that functional programming makes effectful work impractical?

Because neither of those are true.

You’ve conceded that SQL queries are one case where a functional approach is more ergonomic (after first asserting that the world is not functional, whatever that means). Why aren’t there other cases? Are you sure there aren’t other cases? One could argue that a functional approach maps more practically and ergonomically onto the majority of general programming work than any other mainstream approach.


> Are you suggesting functional programming is somehow so resource intensive that it is impractical to use? Or that functional programming makes effectful work impractical? Because neither of those are true.

No, I'm saying the functional abstraction doesn't work or is clunky in a lot of cases and that Haskell's approach of making it the core of the language is misguided (Lisp, for one, is less opinionated than Haskell).

> One could argue that a functional approach maps more practically and ergonomically onto the majority of general programming work than any other mainstream approach.

They could argue, but they would be wrong.

SQL works because it's a strict abstraction on a very defined problem.

Functional is great when all you're thinking about is numbers or data structures.

But throw in some form of IO or even random numbers and you have to leave the magical functional world.

And you know why SQL works so well? Because there are a lot of people working hard in C/C++ doing kernel and low-level work so that the "magic" is not lost on the higher-level abstractions. And can you work without a GC?


> I'm saying the functional abstraction doesn't work or is clunky in a lot of cases and that Haskell's approach of making it the core of the language is misguided

Functional programming certainly doesn't work literally everywhere, but to say Haskell's design is "misguided" is your opinion, and it's one that some of the biggest names in the industry reject. How much experience do you have designing programming languages? Or even just building non-trivial systems in Haskell? Judging by the evident ignorance masked with strong opinions I'd say around about the square root of diddly-nothing.

> But throw in some form of IO or even random numbers and you have to leave the magical functional world.

Wrong. Functional programming handles IO and randomness just fine.
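As a minimal sketch of the randomness point (glibc-style LCG constants, invented function names): a pseudo-random generator is just an ordinary function from a seed to a value plus the next seed, so no mutation is needed at all.

```haskell
-- Minimal sketch: pseudo-randomness as pure state threading.
-- nextRoll is an ordinary function; same seed in, same roll out.
type Seed = Int

nextRoll :: Seed -> (Int, Seed)       -- a die roll in 1..6, plus the next seed
nextRoll s = (s' `mod` 6 + 1, s')
  where s' = (1103515245 * s + 12345) `mod` 2147483648  -- glibc-style LCG

rolls :: Int -> Seed -> [Int]         -- n rolls, threading the seed along
rolls 0 _ = []
rolls n s = let (r, s') = nextRoll s in r : rolls (n - 1) s'
```

Real Haskell code would hide the seed-threading behind the State monad, or get entropy in IO via the `random` package; the point is only that nothing here requires leaving the functional world.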

> And you know why SQL works so well? Because there are a lot of people working hard in C/C++ doing kernel and low-level work so that the "magic" is not lost on the higher-level abstractions

Are you suggesting there aren't a lot of people working hard on GHC? Because if you are — and you seem to be — then again you would be wrong.


> processors are still procedural

Exactly. I don't know why Haskell fanboys insist on abstracting us so far from the machine. Declarative programming is really unhelpful because it says nothing about the order in which code executes.

We live in a mutable physical universe that corresponds to a procedural program. One thing happens after another according to the laws of physics (which are the procedural program for our universe). Electrons move one after another around circuits. Instructions in the CPU happen one after another according to the procedural machine code.

The universe would never consider running 2020 before 2019 and a CPU will never execute later instructions before earlier instructions.

Haskell fanboys talk about immutable data structures like it's beneficial to go back in time and try again. But it's a bad fit for the CPU. The CPU would never run some code and then decide it wants to go back and have another go.


You're saying a lot of wrong things about CPUs. CPUs do execute instructions "out-of-order", and they do speculative execution and rollback and have another go. Branch prediction is an example.

All of this is possible only with careful bookkeeping of the microarchitectural state. I agree the CPU is a stateful object. But even at the lowest level of interface we have with the CPU, machine code, there are massive gains from moving away from a strict procedural execution to something slightly more declarative. The hardware looks at a window of, say, 10 instructions, deduces the intent, and executes the 10 instructions in a better order, which has the same effect. (And yes, it's hard for me to wrap my head around it, but there is a benefit from doing this dynamically at runtime in addition to whatever compile-time analysis.) In short, it is beneficial to go back and have another go.

This was also demonstrated in https://snr.stanford.edu/salsify/. If you encode a frame of video, but your network is all of a sudden too slow, you might want to encode that frame at a lower quality. Because these codecs are extremely stateful (that's how temporal compression works), you have to be very careful about managing the state so you can "go back and have another go".

I am less confident about this, but what you say about the universe also seems wrong. What physical laws do you know of that take the form of specifying the next state in terms of the preceding state? And many of them are literally time-reversible.


Thanks. It was an attempt at parody but apparently I didn't lay it on thick enough. I'll try to up my false-statements-per-paragraph count next time.


What you wrote about CPUs many people believe, and many simpler CPUs operate like that. So it was difficult for me to detect as parody. Sorry that I missed it! It's funny in hindsight.

Not sure what the parody was in computing 2020 before 2019.


> Exactly. I don't know why Haskell fanboys insist on abstracting us so far from the machine.

I believe it is because abstractions are the way we have always made progress. Is the C code that's so close to the machine not just an abstraction of the underlying assembly, which is an abstraction of the micro-operations of your particular processor, which in turn is an abstraction of the gate operations? The abstractions allow us to offload a significant part of the mental task. Imagine trying to write an HTML web page in C. Sure, it's doable with a lot of effort, but is it as simple as writing it using abstractions such as the DOM?

> We live in a mutable physical universe that corresponds to a procedural program. One thing happens after another according to the laws of physics (which are the procedural program for our universe).

You just proved why abstractions are useful. "One thing happens after another" is simply our abstraction of what actually happens, as demonstrated by e.g. the quantum eraser experiment [1][2].

[1] https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser

[2] https://www.youtube.com/watch?v=8ORLN_KwAgs


>I believe it is because abstractions are the way we have always made progress.

>The abstractions allow us to offload a significant part of the mental task

Edge/corner cases in our abstractions are also how propagation of uncertainty [1] happens. You can't off-load error-correction [2].

[1] https://en.wikipedia.org/wiki/Propagation_of_uncertainty

[2] https://en.wikipedia.org/wiki/Quantum_error_correction


I'm not sure what you mean by "You can't off-load error-correction". In the case of classical computing, we do off-load error-correction (I don't have to worry about bit flips while typing this). In the case of quantum computing, if we couldn't off-load error-correction, an algorithm such as Shor's couldn't be written down without explicitly taking error-correction into account. Yet it abstracts this away and simply assumes that the logical qubits it works with don't have any errors.


> Instructions in the CPU happen one after another

https://en.wikipedia.org/wiki/Superscalar_processor

> a CPU will never execute later instructions before earlier instructions

https://en.wikipedia.org/wiki/Out-of-order_execution

> The CPU would never run some code and then decide it wants to go back and have another go

https://en.wikipedia.org/wiki/Speculative_execution


Actually it's the FP fanboys in general, all the way back to those IBM mainframes where Lisp ran.

Even C abstracts the machine; the ISO C standard is written against an abstract machine, not the high-level Assembly many think it is.

Abstracting the machine is wonderful: it means that my application, if coded properly, can scale from a single CPU to a multicore CPU coupled with a GPGPU, distributed across a data cluster.

Have a look at Feynman and the Connection Machine.


> The universe would never consider running 2020 before 2019 and a CPU will never execute later instructions before earlier instructions.

Except when a compile-time optimization reorders them.


And out-of-order execution, which happens at _runtime_.


This 1000 times.

The reason SQL (really - relational algebra) works so well is precisely because relational data is strongly normalized [1].

But the data is only a model of reality, not reality itself. And when your immutable model of reality needs to be updated, strong normalisation is a curse, not a blessing. The data structure is the code structure in a strongly-typed system [2].

Strong normalisation makes change very expensive.

[1] https://en.wikipedia.org/wiki/Normalization_property_(abstra...

[2] https://en.wikipedia.org/wiki/Code_as_data


> The world is not functional, as it turns out.

Have you watched the talk "Are we there yet?" by Rich Hickey? He makes a convincing case, referencing the philosophy of Alfred North Whitehead, that "the world is functional".

http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hic...


I like Hickey, but a lot of his examples are data-driven domains where we can indeed model the world in functional or accounting-like terms.

Where this falls short is domains that are extremely ill-suited to keeping track of past states over time (his identity idea) simply because it would break performance or be hard to model in those terms, say simulations, game development, directly working with hardware and so on.

Much of his argument relies on the GC and internal implementation being able to optimise away the inefficiencies of the functional model that needs to recreate new entities over and over, but this simply is not always enough.

This also is very obvious if you look into the domains where Clojure or other declarative functional languages have success, it's almost always business logic / data pipeline work.

edit: And in fact a lot of his most salient points aren't really as much about functional programming as they are about loose coupling and dynamic typing. A lot of his criticism of state and identity in languages like Java isn't related to Java not being functional; it's related to Java breaking with the idea of late binding and treating objects as receivers of messages rather than "do-ers".


> it would break performance or be hard to model in those terms, say simulations, game development, directly working with hardware and so on

Performance seems to be the problem there more than the model not fitting. And Rich’s answer would probably be that Clojure isn’t the right tool for those tasks in the same way that any general-purpose GC’d language isn’t.

Modeling a game or sim as a successive series of immutable states sounds great to me, so I’m curious where you see a mismatch with them. Abstracting hardware properties this way sounds useful too, but I’ve not worked with it enough to comment.


>Modeling a game or sim as a successive series of immutable states sounds great to me, so I’m curious where you see a mismatch with them.

Because I don't really think the functional model describes it in an intuitive way. You can model a sim or game at a high level as worldstate1 -> worldstate2 and so on, but it doesn't really tell you much: often you don't care what the state was a second ago, certainly not in its entirety, and at that high level of abstraction you don't get any useful information out of it. So there's no point in tracking it the way it makes sense to track a medical history or a bunch of business transactions.

Rather, instead of thinking of games or sims as high-level abstract transitioning states, we tend to reason about them as a sort of network of persistent agents or submodules, and that lends itself much more closely to an OO or message-based view of the world.

I think in many systems that are highly complex and change very incrementally, reasoning about things in terms of functions doesn't really tell you much. You can, for example, reason about a person as, say, a billion states through time, but there's not much benefit to it, at least to me.


That makes sense, and the actor model (well, entity-behavior, which is fairly close) has been what I reached for in the past when doing game dev.

I agree that looking at a whole worldstate at once is unlikely to be useful. But I do see great value in keeping state changes isolated to the edges of a system, and acting upon it with pure functions. If you have the means to easily get at the piece of state tree that you’re interested in, you can reason more clearly about how each piece of a simulation reacts to changes by just feeding the relevant state chunk to it and seeing what comes back. That takes more work to set up or repeat if each agent tracks its own internal state.
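As a hypothetical sketch of that style (the `Player` type, its fields, and the input encoding are all invented for illustration): each agent's update is a pure transition from input and old state to new state, and a frame is just a fold, so any single step can be tested in isolation.

```haskell
-- Hypothetical sketch: a game agent as a pure state transition.
-- step maps (input, old state) to a new state; only the outer game
-- loop would touch the real world.
data Player = Player { hp :: Int, px :: Int } deriving (Eq, Show)

step :: Char -> Player -> Player
step 'r' p = p { px = px p + 1 }            -- move right
step 'h' p = p { hp = max 0 (hp p - 10) }   -- take a hit
step _   p = p                              -- ignore unknown input

run :: [Char] -> Player -> Player           -- a frame is just a fold
run inputs p0 = foldl (flip step) p0 inputs
```

Feeding a chunk of inputs to `run` and inspecting the resulting state is exactly the "feed the relevant state chunk to it and see what comes back" workflow; `run "rrh" (Player 100 0)` yields `Player {hp = 90, px = 2}`.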

I haven’t used Clojure to make a game before, so I’m speculating here. I find myself avoiding internal state by habit lately, though of course “collections of relevant functions” are valuable tools. I just lean towards using namespaced functions instead of objects with methods.


A lot of what's done in software eventually ends up getting hardware support. Our current von-Neumann/modified-Harvard hardware architectures are old and it's precisely thinking outside these lines that will lead to non-local maxima.


In a functional world persistence (e.g memory) cannot be a thing.

The case is convincing, but it requires you to reject your own, human, faculties.

Makes the use/expression of language rather awkward when you can't remember any words.


> In a functional world persistence (e.g memory) cannot be a thing.

Why not?

To give a functional Haskell flavoured example, I can use the Reader and Writer monads (which are functional) to create a whole bunch of operations which write to and read from a shared collection of data. That feels a lot like memory to me.

Indeed, the Reader monad is defined as:

> Computations which read values from a shared environment.

I just don't understand the whole "you can't have memory / order of operations / persistence / whatever else" as an argument against functional concepts when they have been implemented in functional ways decades ago. The modern Haskell implementation of the writer monad is an implementation of a 1995 paper.

Edit: it looks like the person I responded to doesn't actually want to have a reasonable discussion, but for anyone else reading along: it's entirely possible to have functional "state" or "memory" - what makes it functional is that the state / memory must be explicitly acknowledged.

Trying to dismiss functional computation in this way is essentially a no-true-Scotsman: functional computation is useless because it can't do X (X being memory or persistence or whatever), but when someone presents a functional computation that does do X, it's somehow not a "real" functional computation precisely because it does X. Redefining functional computation as "something that can't do X" doesn't help anyone, and doesn't actually advance the discussion of the pros and cons of functional programming, since you're no longer discussing functional computation but some inaccurate, designed-to-be-useless definition mislabeled as functional computation.
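As a minimal sketch of that point (a hand-rolled store with invented names, not the real Reader/Writer machinery): "memory" can be a pure function from the current store to a result and a new store, which is exactly the plumbing the State monad automates.

```haskell
import qualified Data.Map as M  -- containers, a GHC boot library

-- Minimal sketch: functional "memory". A computation is a pure
-- function from the current store to (result, next store); the
-- state is explicitly acknowledged in the type.
type Memory = M.Map String Int
type Mem a  = Memory -> (a, Memory)

writeK :: String -> Int -> Mem ()
writeK k v m = ((), M.insert k v m)

readK :: String -> Mem Int
readK k m = (M.findWithDefault 0 k m, m)

-- Thread the store through by hand; the State monad hides this plumbing.
demo :: Mem Int
demo m0 =
  let (_, m1) = writeK "x" 41 m0
      (v, m2) = readK "x" m1
      (_, m3) = writeK "x" (v + 1) m2
  in readK "x" m3
```

Running `fst (demo M.empty)` gives 42: reads see earlier writes, yet every step is a pure function and every intermediate store is an ordinary immutable value.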


>what makes it functional is that the state / memory must be explicitly acknowledged

Isn't state/memory assumed by default when talking about computation?

What kind of computations you can perform on a Turing machine without a ticker tape?


Lambda calculus, for example, has no ticker tape.


What kind of computations can you perform with Lambda calculus without any input variables?

What are you applying your α-conversion and β-reduction operations to?


[flagged]


If I am Euthyphro, then you are Socrates - no?

I thought Socrates was the troll.


What is this thing that you reading from and writing to?

It sounds mutable.


I’m sorry if immutability is a new concept to you, but you can read from one state, and write to a new state.

There you go. No mutation!


Writing state is the definition of mutation.

It's what all data storage systems do.

An immutable data store sounds pretty useless.

https://en.wikipedia.org/wiki/Persistence_%28computer_scienc...


It’s fine for you to think that. That’s not going to stop other people from finding these concepts useful though.

Once again, you are free to ignore the things you don’t understand. Your trolling is mostly harmless.


>you are free to ignore the things you don’t understand

Is that why you are ignoring the complexity behind persistence/mutability/state?

Even the Abstract Turing machine has ticker tape you know...


Interesting point. Lambda calculus is equivalent, and the expression itself can be seen as playing the role of the ticker tape. So when a reduction occurs in lambda calculus, it's just like a group of symbols being interpreted and overwritten. But the thing is, no one has to overwrite the symbols to "reduce" them; overwriting is just a space (memory) optimization. Usually, to calculate something (you're fully correct, by the way), we do overwrite symbols. The point of using an immutable style is simply to make sure that references don't become overwritten. It sucks to have x+2=8 and x+2=-35.4 within the same scope, right? Especially when the code that overwrote x was in a separate file, in an edge case you didn't consider.


Thanks for your input, Euthyphro!


My knack for spotting performative contradictions didn't sway you towards Descartes?


Actually, according to quantum physics, the world is functional. The universe can be described completely by a function (the universal wave function) and is completely time-reversible; you can run it forwards or backwards because it has no side effects.


The collapse of a wave function into a single state that apparently happens on observation is as far as we know non-deterministic. The wave function itself is a probability amplitude. So I think it's a misinterpretation to say that "the universe can be described completely by a function" when everything we can actually know about the universe is observable and has necessarily collapsed from a probability amplitude of possible states into the observed state.

My interpretation is the exact opposite: Newtonian physics suggests a fundamentally deterministic world in a way that QM can't, because in the latter the only things that are deterministic are the probabilities.


Collapse of a wave function occurs when one wave becomes entangled with another. If I shine a photon on a particle to measure it, and the particle emits a photon back to me, I am now entangled with that particle through mutual interaction. But to a distant observer, there is no wave collapse. If our universe is purely quantum and exists alone (not being bumped into by other universes), then its wave function would just be evolving unitarily according to its Hamiltonian, totally deterministically (the wave function, that is).


I don't know what it means for the universe to be "purely quantum", or what it means to be a "distant observer" yet somehow not interacting. A wave function in itself is not observable. Is it real? Definitional. Is it a physical phenomenon? Dubious. All we know is that as a model it is consistent with what we observe. What actually can be measured (and can reasonably be considered real and physical in that sense) can only be predicted probabilistically using quantum mechanics. It's like saying that I can perfectly predetermine the outcome of a dice roll—it's between one and six.


By that standard every programming language is functional, since the compiled program is a function.


Since I got downvoted let me elaborate: it isn’t relevant that the universe is a function. What matters is whether the universe can be modeled better as the composition of pure functions or as some stateful composition (e.g. interacting stateful objects).

Programming languages aren’t about the final result but about how you decompose the result into modular abstractions.


You're just stating claims without any evidence.



