Probably not, but you can only judge a language once you've written a lot of code in it. If you write large, non-trivial programs in ― for example ― bash, you'll only have to look up bash(1) so many times before you just absorb it. Then you see how things click together, and you don't need the manual much after that.
I like bash. Bash v4 in particular brought some niceties, making it even more suitable for non-trivial tasks.
I generally don't "upgrade" to Python or some "real" scripting language unless I really have to: bash is surprisingly concise and expressive. I've written programs and subroutines in bash that were much smaller than their idiomatic Python counterparts and I was surprised to discover that.
Bash is conceptually pretty safe and suits the functional mindset, because in a large program you find you want to isolate each function the Unix way: byte streams. You pass in a stream (and args) and you get a stream out (and an exit status). You thus end up writing quite pure functions, which feels natural if you come from a functional background. Pure functions/shell scripts are rather safe and also parallelize pretty well.
If part of your program does something critical it's best to move it from a set of functions into a separate shell script. This gives you the same Unix stream API but also the benefit of process-level separation of data and environment. It simply enforces the Unix byte streams as the only way to talk between the caller and the callee.
And usually you can do this in only a few lines of code. Shell scripting can be ― and is ― intuitive. Processing files and streams, doing I/O, and parsing and mangling command lines and strings of words is largely implicit in bash, and that's usually very close to what you want to achieve in the end, so you get down to the actual work pretty quickly.
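A minimal sketch of that stream-function style (the function names here are invented for illustration): each function reads stdin, writes stdout, and touches no shared state, so it composes with a plain pipe.

```shell
#!/usr/bin/env bash
# Sketch of the "pure function as stream filter" style described above.
# Each function takes a byte stream on stdin (plus args) and emits a
# byte stream on stdout -- no shared state. Names are illustrative.

to_upper() {                 # stream in -> stream out
    tr '[:lower:]' '[:upper:]'
}

tag_lines() {                # args tune behavior; the stream is the API
    local tag=$1
    sed "s/^/$tag: /"
}

printf 'alpha\nbeta\n' | to_upper | tag_lines word
# word: ALPHA
# word: BETA
```

Promoting one of these functions to its own script changes nothing for callers: the stdin/stdout contract stays identical.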
That's probably because most people only sporadically have to write shell scripts. If you had to write them every day, I bet you wouldn't have that problem.
Having said that, I personally don't feel shell scripting is all that intuitive either.
Indeed -- that's where Perl still fits into my day-to-day toolbox. It's exactly in those 5-50-liners with a little logic that I want a language with an overabundance of syntactic shortcuts. Shell is the exact opposite of that.
> That's probably because most people only sporadically have to write shell scripts.
Does anyone have to write shell scripts? If I'm writing one and it has any degree of complexity at all -- i.e. it has loops or subroutines -- I just write it in Python instead. The only part of bash I know is the subset I use interactively on the command line.
Agreed. The only reason I've had to learn any 'real' shell is to write portable init scripts. That being said, I'd probably rather write init scripts in ruby or python anyway.
Besides, init scripts aren't really portable across distros to begin with: Ubuntu/Debian scripts rely on start-stop-daemon and a specific filesystem layout. Shell is probably only used for init scripts because executing them with dash (/bin/sh) is likely faster than using a fancier language (though I haven't benchmarked this).
BTW, have you ever tried to write anything non-trivial using dash (POSIX shell features only)? It's pretty painful. Lots of people mistakenly write shell scripts using non-POSIX (bash) features, and are unpleasantly surprised when /bin/sh links to dash, breaking their scripts.
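A few typical traps, as an illustrative sketch: each of these works in bash but fails under dash, with the POSIX-safe alternative alongside.

```shell
#!/bin/sh
# Illustrative bashisms that break when /bin/sh is dash, with POSIX
# replacements alongside:

# 1) [[ ... ]] is a bash keyword; dash says "[[: not found".
#    bash:  [[ $name == foo* ]]
#    POSIX: case $name in foo*) echo match ;; esac

# 2) Arrays don't exist in POSIX sh; dash raises a syntax error.
#    bash:  files=(a b c); echo "${files[1]}"
#    POSIX: set -- a b c; echo "$2"

# 3) ${var/pat/repl} is bash-only ("Bad substitution" in dash).
#    POSIX replacement via sed:
name=foobar
echo "$name" | sed 's/foo/baz/'
# bazbar
```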
I've written large scripts in bash (using range ellipses, arrays, /dev/tcp, arithmetic expansion, nested subshells, changing IFS and using "$@" to handle filenames with spaces, etc.) as part of writing kickstart scripts, RPM install/update scripts, and init scripts.
I'm also familiar with object-pipelining shells from using PowerShell.
I write maybe one or two ad-hoc shell scripts every day, many for a single use. Bash isn't so bad once you get used to it. Perl, Python, or Ruby are often less suited, since getting parallelism or scalability out of them takes more effort than I'm willing to spend on a script I write in two or three minutes. Most of my scripts are pipes of other utilities, written in a streaming style, often with multiple forks writing to temporary files and joining with a sort at the end.
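That fork-and-join pattern might look like this sketch (the inputs are invented stand-ins for real data sources): two producers run in parallel into temp files, and a sort merges the results at the end.

```shell
#!/usr/bin/env bash
# Sketch of "multiple forks writing to temporary files and joining
# with a sort at the end". Inputs are invented for illustration.
tmp1=$(mktemp) tmp2=$(mktemp)

printf 'b\nd\n' | grep . > "$tmp1" &   # fork 1: filter one source
printf 'a\nc\n' | grep . > "$tmp2" &   # fork 2: filter another
wait                                   # join point: both forks done

sort "$tmp1" "$tmp2"                   # merge the parallel results
rm -f "$tmp1" "$tmp2"
# a
# b
# c
# d
```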
I wouldn't feel the same way if it was a different sh than bash. For example, this is easy in bash:
let total=0
let n=10
for ((i=0; i<n; ++i)); do
    if (( i % 2 != 0 )); then
        printf 'Odd: %.3d\n' $i
    fi
    let total+=i
done
echo "Total is: $total, average is $((total / n))"
Staying inside bash, you only have integer arithmetic, but that's sufficient for many problems. The major advantage is that you can easily, and more importantly, readably, compose solutions using programs written in dozens of different languages. The syntax for piping programs together in scripting languages tends to be imperative, rather than declarative like a shell script.
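For example, one declarative pipeline composing separate programs (the input lines are made up): each stage is its own process, possibly written in a different language, and the data flows left to right.

```shell
# Count the most frequent "error" lines in a stream; each stage is a
# separate program, and the pipeline reads declaratively left to right.
printf 'error: disk\nwarning: net\nerror: disk\nerror: cpu\n' |
    grep '^error' |   # keep only error lines
    sort |            # group identical lines together
    uniq -c |         # count each group
    sort -rn |        # most frequent first
    head -2
```

The imperative equivalent in a scripting language would spell out process creation and plumbing for every one of those stages.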
You don't need the semicolons if you prefer newlines. e.g.
let total=0
let n=10
for ((i=0; i<n; ++i))
do
    if (( i % 2 != 0 ))
    then printf 'Odd: %.3d\n' $i
    fi
    let total+=i
done
echo "Total is: $total, average is $((total / n))"
I know zsh has extremely good manpages, even for things like control structures (try zshmisc(1)), and it describes a lot of the differences between itself and bash or restricted sh syntax.
The first example given is a Perl-golf-style obfuscated thingy. Does that strike anyone else as possibly the absolute worst thing you could possibly do when showing someone a language for the first time and wanting to make a good impression?
The next example is a little better. I still have no idea what it means, but at least it looks like I might once I've read the docs (which I will do now).
Having read the tutorial it (the dining philosophers problem example) now looks quite elegant. I'm still a little unsure about exactly what latches do and when you're meant to use the -> operator though. A quick syntax summary would really help.
But that's not the first example. The first example is the clock, which the page says is "terse but complete", and which uses very few names longer than three letters and very little whitespace.
This is pretty cool. The compiler figures out the order of execution and the parallelization of your code for you, based on a dependency graph generated from the code. So you simply describe the behavior of your data, and every description is assumed to be parallel.
Once you read the basic tutorial, the language is not hard to read.
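ANI syntax aside, the scheduling idea can be loosely mimicked in plain shell: tasks with no dependency between them run concurrently, and `wait` marks the point where the graph joins. Everything here (task names, the shared file) is invented; this only sketches the idea, not ANI itself.

```shell
#!/usr/bin/env bash
# Loose shell analogue of dependency-driven scheduling: 'left' and
# 'right' don't depend on each other, so they run concurrently; the
# final stage depends on both, so it runs only after the join.

left()  { echo "left done"  >> results.tmp; }
right() { echo "right done" >> results.tmp; }

: > results.tmp    # start with an empty shared output file
left & right &     # independent graph nodes: run in parallel
wait               # join: both dependencies now satisfied
sort results.tmp   # the dependent node runs last
rm -f results.tmp
# left done
# right done
```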
Quite interesting concept, but I looked through the code repository and didn't find any code generation at all. Only a parser. I think it may be very "alpha".
True, it's not even at alpha yet. But a lot of the semantic processing is being put into the grammar, since in ANI, once a program is grammatically sound, it's just left-to-right type validation and dumping out boilerplate assembly for the equivalent of a type-safe stack machine. So even as it stands, the compiler does a nearly complete job of rejecting invalid programs, which really is the bulk of the work in compiling ANI.
I disagree. The language is readable, if not intuitive. Once you read the tutorial, the examples all make sense. The initial example given is like a minified version of the code, and is probably not a good first impression.
I imagine myself having to read a couple of programs written in this language. It looks cluttered.
Which doesn't mean I think the concepts of the language are bad.
It now just looks like an explosion of \,],['s.
I wonder if this isn't just down to keyboard layouts.
US keyboards have an extra-big left shift and a small return key, with the backslash and pipe above the return. UK (and, as far as I'm aware, most other European) keyboards have a small left shift, with the backslash and pipe key just to the left of the Z. The return key is nice and big, hard to miss. The key above the return is backspace, also nice and big and hard to miss.
(And annoyingly, every single Linux installation I have ever used has mis-mapped the Dvorak translation of the key to the left of the Z, mapping it to < and >, rather than \ and |. I often have to faff about with loadkeys layouts and xmodmap layouts for several hours every time I do a new install.)
I just want to add that the guy who designed the Finnish (same as Swedish) keyboard layout obviously held a deep grudge against C programmers -- most non-letter symbols you need are just awful to access. Backslash is [Alt Gr] and [+].
Yes, the Finnish/Swedish keyboard layout is just horrible today, when most software is designed to be used with a US keyboard. That's why I use a modified* US-intl layout on my computer, even if it's a bit confusing to use other computers.
*Added ä and ö keys, as they are needed in Finnish.
anic is the reference implementation compiler for the experimental, high-performance, statically-safe, fully implicitly parallel, object-oriented, general-purpose dataflow programming language ANI.
The description sounded too over the top -- too many words. Kind of like "The People's Socialist Democratic Republic of ..."; it's a pattern that rings alarm bells.
Graph-based approaches to program flow control are pretty interesting. The connecting of things to each other in a stream-like fashion is similar to ChucK, which you might enjoy.
This is very interesting, thank you. Any others like this?
I'll contribute something to you, in return: if you're interested in graph-based audio/visual programming, check out vvvv for Windows. It's like Processing, but visual (i.e., designed via UI), and very very fast. It was made by visual artists who actually use it to make big bucks, rather than computer science weenies.
There's plenty of dataflow-type languages out there. One of the most interesting, IMHO, is Lustre, since it seems to be used heavily in the defence and aerospace industries, though it's pretty tough to find any information about it (there were quite a few code examples in a recent SIGPLAN publication, though).
I've been playing around with dataflow concepts myself and personally see them as the way forward in programming, though I'm not quite sure yet how it would need to be done to catch on as a mainstream language. I think I'll play with anic for a bit and see what happens :)
Have you ever tried to use Processing for business? I did some visualization/art work that maxed out at 20-30 fps in Processing, but was easier to do and ran at 100+ fps in vvvv. It's a toy.
Not really, no. You just target an area where C is not very good, and you can do it. For bulk threaded stuff doing lots of I/O, Java can be "faster than C." For extremely distributed code, Erlang can be "faster than C."
These claims leverage the fact that the fastest implementation in C is incredibly difficult to write. Usually we discard that difficulty when comparing languages, but as time goes on the cost of developing specific distributed, multi-threaded, and/or reliable systems in pure C becomes so outrageous that it stops making sense to ignore, so we start comparing against less correct implementations.
It's not that much of a claim, if the implicit parallelization works well. Also, some functional languages are getting very nice optimizations, like deforestation/fusion, that make them really fast.
Good point. Also, how do you even measure this speed when talking about C? First, there's not just one implementation of the compiler; second, on a given architecture you may see different performance depending on the quality of the backend...
I don't know about anyone else, but when the homepage screams "portable" and I still need to add header includes to get it to compile (and the tests fail even then), I get worried.
It's portable in the sense that portability concerns are at the forefront and the stable build will work with all the major OSes; the tests fail because it's simply not all implemented yet.
Backslashes were chosen for a purely pragmatic reason: on virtually all keyboards, backslashes are very easy to type (requiring only a single keystroke). This is a handy property for backslashes to have, because in ANI, you'll be typing them a lot!
- Many more examples contrasted with the equivalent imperative code.
- Examples with elaborate data structures.
- An implementation overview (current state & objectives).
- Benchmarks on a multicore machine.
What does it matter what the compiler is written in? (Disclosure: I never checked whether this compiles directly or uses C as an intermediary; if the latter, then OK.)
It doesn't matter, however the OP asked "Is every language claiming to be faster than C now?" which struck me as rather snarky.
The trivial answer is "yes", but that answer raises two non-trivial and more interesting questions:
1) If the language is faster than C, why write the compiler in C rather than natively compiling it? (The acid test of a C compiler is whether it can compile itself.)
2) What computer languages are used to write computer languages?
Well, once the anic compiler is complete, it would perhaps be a good idea to rewrite it in ANI itself and have it be self-hosting, but to get to that stage, you have to write it in another language first. What language? Any at all -- whatever you're most familiar with is probably a good choice (and probably why a lot of compilers are written in C).
I agree. Though your grandfather comment still confuses me.
If you are implementing a language (for which you also have some say in its design) and your compiler is too slow, then I would start e.g. by investigating caching strategies instead of switching implementation languages.
Sure. I would personally only ever switch once, if at all, and that is to the language itself so that it's self-hosted. I wouldn't switch for speed reasons, unless the performance is really, really bad and I can't seem to fix it.
Oh, it might be worth switching from, e.g., C to Haskell for the compiler. Especially if your language isn't that well suited for compiler work, you won't get much out of self-hosting. Paul Graham wrote a bit on this when justifying why Arc won't be self-hosting.
I don't really get it. The speed of your compilation target may matter --- but the speed of your compiler is completely irrelevant to the speed of your language. (This is why native compilers are so fast: they compile to assembly.)
While I agree that the speed of the output executable is more important than the speed of the compiler, the speed of the compiler matters too. A slow compiler wastes lots of developers' time.
When a developer has to wait for the compiler, he's not only waiting; there are also secondary effects: he loses concentration, browses HN, talks to coworkers, etc. http://xkcd.com/303/
Probably the most revolutionary thing about the Borland compilers, for instance, was that they compiled nearly instantly compared to their competition.
Also, self-hosting your compiler can itself provide a great performance suite; compilers are fairly complex programs doing memory- and CPU-intensive work, so the speed with which a compiler compiles itself is a useful benchmark.
>> "Think of ANI as a way to write fast, statically-guaranteed safe, well-structured, and highly parallelized software without ever encountering memory management, threads, locks, semaphores, critical sections, race conditions, or deadlock."
You can do that in pretty much any language if you decide to. I'm not seeing the point here, and the syntax is just horrible.
Am I the only one who still has to Google most of sh's predicates and syntax?
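Probably not. For what it's worth, a handful of test(1)/[ predicates covers most day-to-day lookups; this sketch runs the common ones (the temp file is just something we know exists):

```shell
#!/bin/sh
# The test(1) predicates that cover most day-to-day cases (all POSIX).
f=$(mktemp)                            # a file we know exists
[ -e "$f" ]  && echo "exists"          # path exists
[ -f "$f" ]  && echo "regular file"    # is a regular file
[ -d /tmp ]  && echo "directory"       # is a directory
[ -z "" ]    && echo "empty"           # string is empty
[ -n "x" ]   && echo "non-empty"       # string is non-empty
[ 3 -lt 5 ]  && echo "less-than"       # integer comparison
rm -f "$f"
```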