This language was written by a professor of mine, Ge Wang, who is also the founder of Smule.
He likes to refer to it as "strongly timed", and I think it is a really accurate description. While the language does have limitations, it can also be beautifully expressive for certain things.
If anyone has any questions for the author, please let me know and I'll either get him to come in here or act as a surrogate and answer directly.
I haven't had the time to investigate the language, but I wonder whether its semantics borrows from synchronous programming languages such as Lustre [1], Esterel [2], or Signal [3]. Those languages were designed to program discrete controllers and embedded real-time systems, and they have a very strong notion of timing and scheduling (via typing) in their semantics, which allows one to prove functional properties such as reachability of states, liveness, and time and memory bounds using formal methods such as model checking.
When I was doing my master's thesis in this domain, another student in the lab (who came from an IRCAM [4] master's program) worked on using synchronous languages to follow a musician playing a given music score at an arbitrary speed (or something along those lines). It was quite fun!
I've completed the ChucK course on Coursera. As far as I can tell, it doesn't borrow semantics directly from synchronous programming languages, though it may parallel them. Syntactically it's similar to Java/C++.
Time/Duration are data types, and you must advance "now" to generate samples and events.
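To make that concrete, here's a minimal sketch of what advancing "now" looks like (a hypothetical patch; the pitches and durations are made up):

```chuck
// connect a sine oscillator to the audio output
SinOsc s => dac;
440 => s.freq;

// nothing sounds until time advances; chucking a duration
// to "now" lets the engine synthesize exactly that much audio
1::second => now;

// change the pitch, then let another half second pass
880 => s.freq;
500::ms => now;
```

The key point is that the code blocks on `1::second => now;` with sample-accurate timing, rather than sleeping for approximately a second.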
that's such a nice term, "strongly timed". It seems like the correct term to use for programming languages in which time is reified and in which one can make declarative statements about time and events.
Another system for live interaction with music is Extempore [0], for which you can code in either Scheme or xt-lang. There are some very impressive videos of it in action [1]. Extempore is a re-designed version of Impromptu [2] which was somewhat more restrictive in terms of platforms and editor integration.
I don't understand ChucK or Extempore well enough at a lower level (yet) to provide a good comparison, but I'm just happy to see people introduced to any music hacking tools. :)
ChucK is old news in electronic music circles. SuperCollider (http://supercollider.sourceforge.net/) is (I believe) the most popular and well-maintained of these languages.
Overtone is just a Clojure wrapper around the SuperCollider application. (It's pretty neat too!)
I very briefly worked on ChucK while at Princeton years ago, although I never finished my contribution to become an official committer. Having spent a decent amount of time with it, I think the biggest problem with ChucK is lack of flexibility within the strongly-timed model. The ChucK operator, `=>`, is used to connect unit generators, but as I recall, you can't use it to connect arbitrary ports of those unit generators together. Each has only one designated ChucK input and/or output. This makes it difficult to truly experiment with modulation. You pretty quickly reach a point where you end up writing the same old procedural code you'd write in C or Java, explicitly manipulating samples instead of streams.
There are many other systems that are similar, including SuperCollider, Max/MSP, PureData, CSound and JSyn/JMSL.
I think the intuition behind ChucK is similar to the functional reactive programming of Elm [1]. But it doesn't seem as though it's been thought completely through. It would be cool to see a true FRP implementation for sound.
I've been working on building a set of basic music patches simultaneously in ChucK, PureData, SuperCollider and Pyo. The idea is to produce patches which produce the identical sound, to illustrate the differences between the music languages.
I have a long background as a C programmer. Because ChucK is C-like, I have had the least trouble learning it, of the 4 languages mentioned above. The simple stuff I implement in ChucK tends to work the first time. However, my musician friends who are not C programmers tend to find visual patching languages (like PureData and Max) far more intuitive.
Performance-wise, ChucK seems to me to be the least performant of the four mentioned above; however, it works fine for my needs, and I look forward to doing more work with it.
> you can't use it to connect arbitrary ports of those unit generators together. Each has only one designated ChucK input and/or output. This makes it difficult to truly experiment with modulation
You can route things to multiple destinations with multiple ChucK operators.
You can also route to specific inputs by naming them.
Arbitrary routings are neither impossible nor a problem.
SinOsc A => dac.left; // Here, A's main output connects to a specific input of object dac.
SinOsc B => dac;
A => B.freq; // Here we see the same A having another output connection, and connecting to a specific input of another object to create FM modulation.
NOTE: please ignore the above post. I shouldn't post before coffee, or without checking code. Routing A's audio output to B's frequency control input does require doing it in a loop, since A's output is not compatible with freq's input. You have to do `A.last() => B.freq`, and last() returns the current sample, so it needs to be called in a loop to work, which is what the AC was saying: parameter modulation ends up being fiddling with individual samples.
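For what it's worth, the loop-based workaround described above looks roughly like this (a minimal sketch; the oscillator names, rates, and modulation depth are all made up):

```chuck
SinOsc lfo => blackhole;   // modulator; blackhole pulls its samples silently
SinOsc carrier => dac;     // carrier we actually hear
5 => lfo.freq;             // slow 5 Hz modulator

while (true)
{
    // poll the modulator's most recent sample and fold it
    // into the carrier's frequency around a 440 Hz base
    440 + (lfo.last() * 20) => carrier.freq;
    1::samp => now;        // advance time one sample at a time
}
```

This is exactly the "fiddling with individual samples" complaint: the modulation routing lives in your loop body rather than in the unit-generator graph itself.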
ChucK's pretty fun; I've played around with it a little to write a couple of electronic demos. I like the fact that I can do full-on generative synthesis with it: samples can be used, but it's more fun for me to craft sounds algorithmically.
This one's a laid-back idea for a 'chillout' song I haven't finished:
I'm not really sure ChucK gets a lot of real use. It seems libraries like Overtone have surpassed it in popularity, I assume because of the language's accessibility.
It has some really interesting concepts built into the language, like time. But in my experience this doesn't provide huge value for the purpose of composition, especially since livecoding is typically loop-based.
I'd love a ChucK VM for the Web Audio API. On the back burner; things I should do but won't.
I took a course in ChucK at university. It was such an awful way to make music that all but three or four of around 20 people in the class dropped out.
Interestingly enough, I have the opposite mindset. I never learned to play any instruments (although I would still like to buy a keyboard/synth one day), but I came across a reference to music programming the other day, which led me to ChucK. I'm intrigued by the idea of learning this (or one of the similar products) as a way to explore creating music without needing to learn to play guitar or whatever.
I thought the same many years ago and spent lots of time with trackers and more programmatic methods of music production.
Then I bought a hefty synth, and after about six months of sounding like a tramp bashing a trash can lid, it finally clicked. Now it's an extension of my mind, much like whistling a tune or tapping one out on your knee with your hand.
Very cool. I do still want to buy a synth/keyboard eventually, although I'm afraid that I'll wind up buying one then never find/make time to actually invest in learning to play. :-)
The Smule music apps (Ocarina, Leaf Trombone, etc.) are all written using ChucK, IIRC.
Outside of that, you're right -- it's not widely used -- although, at CalArts (where I currently go to college, http://www.calarts.edu), it's extremely popular in the Music Technology department. We use it mainly for handling MIDI->OSC translation to control robotic instruments (http://www.karmetik.com/media) and interfacing other electronics, like custom built musical interfaces and controllers, with music software.
What makes it so awesome is really the way it handles concurrency, automagically syncing a bunch of different "shreds" at the same time. It's deeper than, say, the Web Audio API. Where it's lacking is mobile support. The ancient Csound has an Android version that actually lets you develop live code on your Android device. While Smule had a cool adaptation for iOS, Csound still seems more forward-thinking in that it already supports Android and Chrome OS.
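To illustrate the concurrency model, here's a minimal sporking sketch (the function name, pitches, and timings are invented for the example). Each `spork ~` launches a shred, and every shred advances against the same shared `now`, so they stay sample-synchronized:

```chuck
fun void beep(float freq, dur gap)
{
    SinOsc s => dac;
    freq => s.freq;
    while (true)
    {
        1 => s.gain;  100::ms => now;  // tone on
        0 => s.gain;  gap => now;      // silence between beeps
    }
}

// two concurrent shreds at different rates
spork ~ beep(440, 400::ms);
spork ~ beep(660, 600::ms);

// the parent shred must keep running, or its children are reaped
2::second => now;
```

Because all shreds share one clock, the two beep patterns interlock deterministically rather than drifting the way OS threads would.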
It's tough to make a direct comparison. They're very different systems. Here are a couple thoughts:
- From what I recall, ChucK doesn't really have any equivalent to CSound's score -- you just procedurally schedule changes. If you've ever seen JSyn [1], I think ChucK is roughly equivalent to it, but with the language adapted to be more music-centric than Java. JMSL [2] is roughly a Java equivalent to CSound's score file, and I don't know of anything in ChucK that fills that role.
- CSound is not really built for modularity or flexible routing between components. ChucK is better in that regard.
- CSound also has the benefit of a couple decades' worth of usage and all sorts of interesting corners to its ecosystem.
I'm sorry if this isn't a very coherent comparison. I haven't used CSound in a few years, and ChucK in even longer. If you're doing classical music, I imagine you'll value CSound's scoring capabilities over ChucK's synthesis flexibility.
Thanks! I didn't know there was a new version. The lack of stereo UGens drove me to SuperCollider back then, but I like the ChucK coding style. Cheers.