Hacker News

What is your use case?

Have you looked at Numba? There are guarantees about memory that can be made in Fortran that can't be expressed in C. Numba is trying to bring the same kind of optimizations to NumPy-based Python code, because it does know some of the memory constraints.
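The "memory constraints" in question are things like layout and contiguity, which NumPy exposes on every array and which a compiler can exploit. A small illustration (the shapes and values here are just made up for the example):

```python
import numpy as np

a = np.zeros((4, 3))                 # default C (row-major) order
print(a.flags['C_CONTIGUOUS'])       # True: each row is contiguous in memory
print(a.strides)                     # (24, 8): 8-byte float64s, 3 per row

f = np.asfortranarray(a)             # column-major copy, like a Fortran array
print(f.flags['F_CONTIGUOUS'])       # True
```

Knowing an array is contiguous in a given order lets generated code use simple pointer arithmetic instead of general strided access.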

Note: I work for Continuum Analytics.



So I watched the Numba talk at PyCon. What I still don't understand is: does it speed up any Python code, or only code that uses NumPy? How does it know if you're using NumPy?


It's a NumPy-aware compiler: you tell the jit decorator the types of the arguments the function will be called with, and that information is used in the compilation. They don't have to be NumPy arrays, but the type declaration mechanism does know about them and can optimize around that. Much as providing type information lets Cython generate efficient C code, providing type information on the decorator lets Numba generate efficient LLVM bitcode.

There is also an autojit decorator that watches what you call the function with and compiles it for the given type signature.
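A minimal sketch of the two styles described above. In current Numba the autojit behavior is folded into jit itself: with no signature, types are inferred from the first call; the eager form takes an explicit signature such as "float64(float64[:, :])". The try/except fallback is only so the sketch still runs where Numba isn't installed.

```python
import numpy as np

try:
    from numba import jit           # eager form would be: @jit("float64(float64[:, :])")
except ImportError:                 # no-op fallback so the sketch runs without Numba
    def jit(*args, **kwargs):
        if args and callable(args[0]):
            return args[0]
        return lambda f: f

@jit(nopython=True)                 # lazy (autojit-style): compiled on first call
def sum2d(a):
    total = 0.0
    for i in range(a.shape[0]):     # explicit loops like these are exactly what
        for j in range(a.shape[1]): # Numba compiles down to fast machine code
            total += a[i, j]
    return total

print(sum2d(np.ones((100, 100))))   # 10000.0
```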


I've got some numerical analysis code in f77 that leverages LAPACK. I tried translating it to C but was never happy with using CLAPACK or ATLAS.
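For a use case like this, it's worth noting that NumPy's own linear-algebra routines are LAPACK-backed (np.linalg.solve calls LAPACK's gesv), so much f77-style LAPACK usage maps onto NumPy calls without hand-wrapping CLAPACK or ATLAS. A tiny sketch with a made-up 2x2 system:

```python
import numpy as np

# Solve A @ x = b via LAPACK's gesv (LU factorization), which is what
# np.linalg.solve dispatches to under the hood.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)
print(x)  # [2. 3.]
```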



