Evil ML: ML to C++ template language (akabe.github.io)
66 points by ingve on Aug 13, 2015 | 22 comments


Neat hack :) Actually whenever I need to do some C++ template metaprogramming I first imagine a functional program that would perform the task and then translate it into templates. Turns out it is much easier than starting with templates directly!
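For example (my own toy illustration, nothing taken from the linked tool): the ML-ish `let rec fact n = if n = 0 then 1 else n * fact (n - 1)` maps almost mechanically onto templates, with the recursion becoming the primary template and the base case a specialization:

    #include <iostream>

    // let rec fact n = if n = 0 then 1 else n * fact (n - 1)
    template <int N>
    struct Fact {
        static const int value = N * Fact<N - 1>::value;  // recursive case
    };

    template <>
    struct Fact<0> {
        static const int value = 1;  // base case: the "if n = 0" branch
    };

    int main() {
        std::cout << Fact<5>::value << std::endl;  // 120, computed at compile time
        return 0;
    }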


That is a neat hack - I will try this out soon.


If ML and C++ excite you, don't miss out on Felix: http://felix-Lang.org You will get more out of it if ML excites you more than C++, or rather if it is the possibility of talking to C++ with minimal or no glue that excites you.

It won't be totally wrong to say that it is to C++ what F# is to C#, or perhaps, a little more accurately, what Scala is to Java.

It is plenty fast too. A toy brainfuck interpreter written in it turned out to be speedier than the C++, D, and Go versions, but got beaten by an interpreter written in Nim. I can post the link once I get off the phone.

EDIT: Here it is https://github.com/kostya/benchmarks/blob/master/README.md#b...

Something I find interesting is that it has had fibres and coroutines long before Go was a thing. The author (not me) used it in his projects on telco switches.


The benchmark is rubbish. The Nim version is using a standard Nim 'table' for the bracket_map, which is a hash table. The C++ version is using std::map, which is a red-black tree.

A quick once-over with gprof shows that the C++ interpreter running bench.b spends 30% of its time allocating rb-tree nodes and another 30% searching the tree.

Swapping it out for an unordered_map (a hash table) actually makes it slower, but ~40% of the runtime is then sunk into inserting into the hash table. Probably allocation again.
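Roughly the kind of thing I mean, written from memory rather than copied from the benchmark (it assumes the brackets are balanced):

    #include <string>
    #include <unordered_map>  // was <map> in the original
    #include <vector>

    // Precompute matching bracket positions for a brainfuck program.
    // The only change from the std::map version is the container type.
    std::unordered_map<int, int> build_bracket_map(const std::string& prog) {
        std::unordered_map<int, int> bracket_map;
        std::vector<int> stack;  // positions of unmatched '['
        for (int i = 0; i < static_cast<int>(prog.size()); ++i) {
            if (prog[i] == '[') {
                stack.push_back(i);
            } else if (prog[i] == ']') {
                int open = stack.back();
                stack.pop_back();
                bracket_map[open] = i;  // jump forward past the matching ']'
                bracket_map[i] = open;  // jump back to the matching '['
            }
        }
        return bracket_map;
    }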

I couldn't be bothered to drill in any further.


Just so that you know, neither the Felix author nor I am involved in the benchmark. More tellingly, the benchmarked code was user-contributed. Regardless, I did not get your point, in case you had one. Different implementations used their idiomatic data structures and got what they got. You ranted about the choice, tried an alternative, found it slower, and dismissed the whole thing as not worth your time. And you said all this because...? There is probably a reason and an underlying conclusion you wanted to convey, but frankly it escapes me, so I couldn't be bothered with following up further on a middlebrow dismissal either.


An interesting language. I'd like it if their performance charts had numbers on them instead of just the shapes. Leaves me guessing more than knowing.


Performance charts on the website?

I had no idea even those were there on the website :) Could you post the link, please? BTW the website is running on a web server written in Felix, and the entire live install image can be browsed online.



That's the shootout benchmark code, right? Are you seeing charts there too?! The website caches aggressively, so it could be that I am seeing stale data. Let me check.

Edit: Ah! They are here: http://felix-lang.org/speed/ in their individual directories.


I'm seeing charts which lack the data saying what they mean. Most others have numbers, indicate whether higher or lower is better for a given benchmark, etc. That's my gripe.


Oh, I understood your gripe from the start. My confusion was that I couldn't locate the plots you were referring to. In any case, these are not there to showcase Felix, just as a sanity check in the build process.


Thanks for clarifying. That makes sense. I think they should do a showcase, though, as performance is a prime concern in language comparisons.


Indeed. Off topic: given the description of your Lisp tool, I think you will find Clasp interesting, unless you are the Clasp author :)

https://github.com/drmeister/clasp


You're like the third person that's suggested that and joked about me being the author. I've been really crunched for time but I plan to get back into LISP soon. Might go ahead and add Clasp to my todo list for that haha.

Anyway, I was mainly considering Racket for its powerful metaprogramming abilities, its IDE, and the constant optimization of its compiler. The enhanced metaprogramming should nicely help with a new instance of my platform, given what I did with CL's macros & meta. Thoughts?


wrt felix:

"Chat and Discussion: Facebook Group Felix Programming Language"

This is acceptable for some ad hoc NGO initiative but it is ridiculous for a programming language project.


Most of the discussion is on Google Groups. Btw, I think Google Groups is one of their shittiest services. They bought all the Usenet data, ran the competition into the ground, and then put the most ridiculously inefficient and annoying interface on top.


That's pretty cool. A long time ago, I had a mock-up of C/C++ in LISP that let me use LISP macros instead of C++'s metaprogramming. A generator produced C/C++ from it when I was done with the algorithm. Much automation of boilerplate, portability, security, etc., w/ some 4GL-style stuff for productivity and LISP's incremental (per-function) compilation. Miss that tool.

Anyway, just mentioning it because it's the best of both worlds and someone else might want to re-create the concept. I saw one person do something similar for C, just for interactive development. However, my old toolkit showed it could do a lot more. Some bright people could probably take it way further than I did.


Nice, but I will nitpick a little:

- Double underscores at the beginning of names (in fact, any identifier containing a double underscore) are reserved for the compiler/standard library.

- Using a namespace instead of the "__ml_" prefix seems more appropriate (see the sketch below).
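Something like this, for example (the namespace and template names are purely illustrative, not what the tool actually emits):

    // Instead of emitting __ml_add, __ml_fact, etc. at global scope,
    // wrap the generated templates in an ordinary namespace.
    namespace evil_ml {

    template <int A, int B>
    struct add {
        static const int value = A + B;
    };

    }  // namespace evil_ml

    // Usage: evil_ml::add<2, 3>::value -- no reserved identifiers involved.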


Looks neat, but I can't get it to compile. I get hundreds of warnings about interfaces existing in multiple locations, and then finally an error that module Location is defined in src/location.cmx and ocamlcommon.cmxa.


Oh god, I don't even want to think about the compile times.


In the LISP/C/C++ system I referenced, I was able to largely eliminate compile times by doing the development in LISP until it was correct, then batching the extraction to C/C++, compilation, and tests overnight. So, if it's done right, it should be faster to iterate with than a C++ tool, even on older hardware. I don't do a lot of work with the MLs or this tool, so I can't remember whether they have an interactive option or how fast it is.


Then just think of the run times instead.



