Definitely an interesting read! I have a few comments about the following paragraph specifically, because the potential of convenient string processing is so essential to logic programming:
"As in Prolog, strings are lists of character codes which makes them
automatically suitable for stream processing. Apart from avoiding the
tedious duplication of list and string manipulation operations, this
allows for efficient, asynchronous handling of streams of characters,
so the extra space needed for lists is partially balanced by manipulating
them in a lazy manner, one piece at a time, by communicating processes."
In Prolog, the meaning of double-quoted strings depends on the value of the Prolog flag double_quotes. Only if it is set to the value codes are double-quoted strings interpreted as lists of codes. However, a much better setting is the value chars, and then double-quoted strings are interpreted as lists of 1-char atoms. For example, with this setting, we have:
?- "abc" = [a|Ls].
Ls = "bc".
This is the default value in all of the three newest Prolog systems: Scryer Prolog, Tau Prolog and Trealla Prolog. It was also the original interpretation of strings in the very first Prolog systems, Prolog 0 and Marseille Prolog, which were designed for natural language processing. Lists of characters ensure that strings are very readable, also when emitted in their canonical representation. For example, Scryer Prolog yields:
?- write_canonical("abc").
"abc"
The mentioned advantage of "avoiding the tedious duplication of list and string manipulation operations" in this way can hardly be overstated: it is massive, because everything one knows about lists carries over directly to strings. This includes common predicates such as append/3, length/2 and reverse/2, which even beginning Prolog programmers universally know, and the ability to generalize away arbitrary elements by placing logic variables at the desired positions, obtaining partially instantiated terms such as "all strings of length 3", which can be specified symbolically as [_,_,_]. In addition, lists are especially convenient to reason about with Prolog's built-in grammar mechanism, definite clause grammars (DCGs), which are tailor-made for generating and parsing sequences, and therefore also strings. And further, "the extra space needed for lists" can be avoided with a compact internal representation of lists of characters, i.e., encoding them as sequences of raw bytes using UTF-8. This efficient representation is already implemented in Scryer Prolog and Trealla Prolog.
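To make this concrete, here is a small sketch of how ordinary list machinery applies to strings, assuming a system where double_quotes is set to chars (the library names are as in Scryer Prolog; other systems may differ):

```prolog
:- use_module(library(dcgs)).    % library names as in Scryer Prolog
:- use_module(library(lists)).

% With double_quotes set to chars, list predicates apply to strings:
%
%   ?- append("ab", Rest, "abcd").
%      Rest = "cd".
%   ?- length(Cs, 3).
%      Cs = [_A,_B,_C].          % "all strings of length 3"

% A DCG describing a non-empty sequence of decimal digits:
digits([D])    --> digit(D).
digits([D|Ds]) --> digit(D), digits(Ds).

digit(D) --> [D], { member(D, "0123456789") }.

% ?- phrase(digits(Ds), "2024").
%    Ds = "2024".
```

The same digits//1 grammar can also generate digit strings on backtracking, which is exactly the parse/generate symmetry the comment alludes to.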
The author doesn't mention it, but Concepts, Techniques, and Models of Computer Programming by Van Roy & Haridi teaches concurrent logic programming (as part of a tour through a wide variety of programming models, including logic programming, stateful concurrency, and constraint programming). It's a great introduction to the paradigm. The full text is available online: https://www.info.ucl.ac.be/~pvr/VanRoyHaridi2003-book.pdf
If you pass single operations between threads, you will run slower, not faster, because of the synchronization overhead.
Whenever I parallelize something I build batching in from the beginning. Most other people seem to think it is an optimization you can do later, but the way I see it, if your goal is to get any speedup at all, batching is essential.
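As a minimal sketch of the idea (using SWI-Prolog's thread and message-queue primitives; the batch size of 1000 and the summing workload are just illustrative assumptions): sending one list of N items pays the queue synchronization cost once, instead of N times.

```prolog
:- use_module(library(lists)).

% Consumer: each received message is a whole batch (a list), processed
% in one go, so per-message locking is amortized over the batch.
consume(Queue) :-
    thread_get_message(Queue, Msg),
    (   Msg == done
    ->  true
    ;   sum_list(Msg, S),                 % process the batch at once
        format("batch sum: ~w~n", [S]),
        consume(Queue)
    ).

% Producer: split the work into batches of BatchSize before sending.
produce(Queue, BatchSize, Items) :-
    length(Prefix, BatchSize),
    (   append(Prefix, Rest, Items)
    ->  thread_send_message(Queue, Prefix),
        produce(Queue, BatchSize, Rest)
    ;   thread_send_message(Queue, Items),   % final, short batch
        thread_send_message(Queue, done)
    ).

% ?- message_queue_create(Q),
%    thread_create(consume(Q), T, []),
%    numlist(1, 10000, Ns),
%    produce(Q, 1000, Ns),
%    thread_join(T, _).
```

Sending the items one by one would mean 10000 synchronized queue operations instead of 11.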
> This implementation additionally can distribute pools of parallel processes over native OS level threads, thus taking advantage of multicore architectures. Locking overhead should be small, as data is normally not shared among processes executing on different threads.
Yeah, I agree, but as the sibling said, if you are not in a distributed setting and have an N:M threading model, the overhead can be low enough that it's often not such an issue. But otherwise I'm a little unclear on whether concurrent logic programming supports e.g. asynchronous comms with pipelined requests. If not, I guess to some extent you could do it at the application level?
For my current side project I am making collages out of images I took with my DSLR. I like to use expensive scaling algorithms because I’ll have to look at the result when I am done.
For this the natural ‘batch’ is one image and that justifies the overhead easily.
I like the exploration of a different execution model over a Prolog-like syntax. Even without concurrency as a headline feature, this seems like a good basis for a language's computational model. I always felt that Prolog's default of executing goals in written order sacrifices too much power for implementation convenience, ruling out a deep well of potential optimizations and generalities.
> the order in which these unifications take place is immaterial, any order is fine ... if a logic variable is unbound, then matching a pattern containing such a variable suspends and does not continue until the variable has been bound.
> There is no need to artificially orchestrate how the processes execute in parallel, this is done implicitly like the cells of a spreadsheet - once a required result is available, it may trigger further variables to be bound and processes to be resumed, and so on, until the network of processes settles.
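The suspend-until-bound behavior quoted above can be approximated in plain Prolog with freeze/2 (available e.g. in SWI-Prolog and in Scryer Prolog's library(freeze)); this is just a sketch of the data-flow idea, not KL1 itself:

```prolog
:- use_module(library(freeze)).   % as in Scryer; freeze/2 is built in on SWI

% sum/2 suspends until its input list is (partially) instantiated,
% like a spreadsheet cell waiting on its inputs.
sum(Xs, S) :- freeze(Xs, sum_(Xs, S)).

sum_([], 0).
sum_([X|Xs], S) :-
    sum(Xs, S0),
    freeze(S0, S is X + S0).      % resumes once the sub-sum is known

% ?- sum(Xs, S), Xs = [1,2|T].
%    S stays unbound: the computation is suspended on T.
% ?- sum(Xs, S), Xs = [1,2,3].
%    S = 6.
```

Binding the tail of the list triggers exactly the resumptions that depend on it, and the network settles once no goal is left suspended.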
The author notes: "There is a compiler for KL1 to C available [2], which seems to work, but suffers from bit rot." Note that Ueda and colleagues have been updating this software (KLIC): see https://www.ueda.info.waseda.ac.jp/software.html for details. Ueda's work on LMNtal is also very interesting.
Concurrent constraint programming is especially interesting to me -- Saraswat's ideas about a general constraint store (versus the Herbrand domain Prolog is usually instantiated over) are genuinely fascinating. I see the relatively recent work on LVars as an example of where that flexibility really pays off.
Please don't pick the most provocative thing in an article and then copy it into the threads to complain about it. The most provocative thing is nearly always something shallow and barely (if at all) related. This leads to shallow/indignant discussions, which are the opposite of what we're going for here.
It would be much better to find the most interesting/surprising/meaty thing and start a thread with that.
(Also, re formatting - see https://news.ycombinator.com/newsguidelines.html: "Please don't complain about website formatting, back-button breakage, and similar annoyances. They're too common to be interesting. Exception: when the author is present. Then friendly feedback might be helpful.")
p.s. Your comments are usually excellent—thank you!
What's wrong with the formatting? Plain text has a lot going for it. (I remember when most/all GameFAQs guides were text files just like this -- not to mention the Super Metroid guide [0] that manually justified every paragraph via word choice.)
That's a reasonable criticism. It works better in Reader View (because you can scale the font down, at least in Firefox), but I have to rotate to landscape as well.
Yeah. GameFAQs guides had perfect usability. Now they're multipage HTML documents and searching is impossible. Sometimes the "guide" consists of nothing but YouTube links.
I kept reading but I did have a "hey, that's kind of unfair" reaction at that line. It's not their fault that the speed of light is a hard limit, eh?
There's a chicken-and-egg problem with compilers and languages on the one side and machine architectures and instruction sets on the other; it's not easy breaking the mold. (GPUs are a counter-example, perhaps?)
> It's not their fault that the speed of light is a hard limit, eh?
If you're at Intel, you're not likely to see this as a calm period bereft of ideas, while Apple silicon has made such strides. New ideas, or simply a rebalancing of familiar ideas, can make significant progress against the same physics.
Actually, it's a pretty substantive and spot-on comment. The CPU people have been trying _really_ hard to make existing workloads run faster.
But the software people have known for 50 years that concurrency was going to be necessary to go further... it was just never really a good time.
Maybe we can finally start looking at other programming models instead of wasting so much time and getting such crappy reliability and performance out of threads-and-locks?